Search Results

Search found 11277 results on 452 pages for 'jeff certain'.

Page 239/452 | < Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >

  • Is my use case diagram correct?

    - by Dummy Derp
    NOTE: I am self-studying UML, so I have nobody to verify my diagrams and hence I am posting here; please bear with me. This is the problem I got from some PDF available on Google, which simply had the following problem statement:

    Problem Statement: A library contains books and journals. The task is to develop a computer system for borrowing books. In order to borrow a book the borrower must be a member of the library. There is a limit on the number of books that can be borrowed by each member of the library. The library may have several copies of a given book. It is possible to reserve a book. Some books are for short-term loans only. Other books may be borrowed for 3 weeks. Users can extend the loans.

    1. Draw a use case diagram for a library.
    2. Give a use case description for two use cases:
       • Borrow copy of book
       • Extend loan

    Diagram:

    Use case description:

    1. Borrow a copy of the book: If the person wishes to borrow a book from Derpville Public Library, he/she must be a member of the library, in which case they will be allowed to borrow a certain number of books. If the person is not a member, the book will not be issued to them for taking away; rather, they will have to sit and read it in the library.

    2. Extending loan: Some books will be lent for 3 weeks while others will be lent for more than 3 weeks, in which case the person borrowing has to come to the library and get the date extended. There is a limit on how much the user can extend the date of a particular book.

    Read the article

  • Trying to learn how to use WCF services in a WPF app, using MVVM

    - by Rod
    We're working on a major re-write of a legacy VB6 app, into a WPF app. I've written several WCF services, which are meant to be used with the new WPF app. We want to use the MVVM design pattern to do this, but we don't have experience at that. So, in order to learn MVVM we've watched a video on WindowsClient called How Do I: Build Data-driven WPF Application using the MVVM pattern. This is a great introduction, and we refer to it a lot, but for our situation it doesn't quite give us enough. For example, we're not certain how to use datasets returned by my WCF services in our new WPF app using the ideas that Todd Miranda introduced in the video I referenced. If we did as we think we're supposed to do, then we should design a class that is exactly like the class of data returned in my WCF service. But we're wondering, why do that, when the WCF service already has such a class? And yet, the class in the WPF app has to at least implement the INotifyPropertyChanged interface. So, we're not sure what to do.

    Read the article

  • SQL Azure Security: DoS

    - by Herve Roggero
    Since I decided to understand in more depth how SQL Azure works, I started to dig into its performance characteristics. So I wrote an application that allows me to put SQL Azure to the test and compare results with a local SQL Server database. One of the options I added is the ability to issue the same command on multiple threads to gather certain performance metrics. That's when I stumbled on an interesting security feature of SQL Azure: its Denial of Service (DoS) detection engine. This feature checks the rate at which connections are being established, and if that rate is too high, SQL Azure blocks all communication from that machine. I am still trying to learn more about this specific feature, but it appears that going to the SQL Azure portal and testing the connection from the portal "resets" the feature and you are allowed to connect again... until you reach the login threshold. In the specific test I was performing, all the logins were successful. I haven't tried to log in with an invalid account or password... that will be for next time.

    On my LinkedIn group (SQL Server and SQL Azure Security: http://www.linkedin.com/groups?gid=2569994&trk=hb_side_g), Chip Andrews (www.sqlsecurity.com) pointed out that this feature could in itself present an internal threat. In theory, a rogue application could issue many login requests from a NATed network, which could potentially prevent any production system on the same network from connecting to SQL Azure. My initial response was that this could indeed be the case. However, while the TCP layer carries only the latest NATed IP address of a machine (which masks the origin of the machine making the SQL request), the TDS protocol itself contains the IP address of the machine making the initial request; so technically there would be a way for SQL Azure to block only the internal IP address making the rogue requests. So this warrants further investigation... stay tuned...

    Read the article

  • Desktop Fun: Waterfalls Theme Wallpapers

    - by Asian Angel
    Do waterfalls remind you of exotic locations or peaceful settings far away from everyday stress? Then you will definitely want to have a look through our Waterfalls Theme Wallpaper collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers be certain to visit our new Desktop Fun section.

    Read the article

  • TechEd Europe early bird saving – register by 5th July

    - by Eric Nelson
    Another event advert alert :-) But this one comes with a cautious warning. I spoke at TechEd Europe last year. I found TechEd to be a huge, extremely well run conference filled with great speakers and passionate attendees, in a top-notch venue and fascinating city. As an “IT Pro” I think it is the premier conference for Microsoft technologies in Europe. However, IMHO and those of others I trust, it didn’t hit the mark for developers in 2009. There was a fairly obvious reason – the PDC was scheduled to take place only a couple of weeks later, which meant the “powder was being kept dry” and (IMHO) some of the best speakers on developer technologies were elsewhere. But I’m reasonably certain that this won’t be repeated this year (Err… have I missed an announcement about “no PDC in 2010”?). Enjoy:

    Register for Tech·Ed Europe by 5 July and Save €500
    Tech·Ed Europe returns to Berlin this November 8 – 12, for a full week of deep technical education, hands-on learning and opportunities to connect with Microsoft and Community experts one-on-one. Register by 5 July and receive your conference pass for only €1,395 – a €500 savings.

    Arrive Early and Get a Jumpstart on Technical Sessions
    Choose from 8 pre-conference seminars led by Microsoft and industry experts, selected to give you a jumpstart on technical learning. Additional fees apply. Conference attendees receive a €100 discount.

    Join the Tech·Ed Europe Email List for Event Updates
    Get the latest event news before the event, and find out more about what’s happening onsite. Join the Tech·Ed Europe email list today!

    Read the article

  • Dealing with selfish team member(s)

    - by thegreendroid
    My team is facing a difficult quandary: a couple of team members are essentially selfish (not to be confused with dominant!) and are cherry-picking stories/tasks that will give them the most recognition within the company (at sprint reviews etc., when all the stakeholders are present). These team members are very good at what they do and are fully aware of what they are doing. When we first started using agile about a year ago, I can say I was quite selfish too (coming from a very individual-focused past). I took ownership of certain stories and didn't involve anyone else, which in hindsight wasn't the right thing to do, and I learnt from that experience almost immediately. We are a young team of very ambitious twenty-somethings, so I can understand the selfishness to some extent (after all, everyone should be ambitious!). But the level this selfishness has reached of late has started to bother me and a few others within my team. The way I see it, agile/scrum is all about the team, not individuals. We should be looking out for each other and helping each other improve. I made this quite clear during our last retrospective: we should be fair and give everyone a chance. I'll wait and see what comes of it in the next few sprints. In the meantime, what are some of the troubles that you have faced with selfish members, and how did you overcome them?

    Read the article

  • Can I override fonts installed by ttf-mscorefonts-installer, prefer Liberation fonts?

    - by conner_bw
    I had to apt-get install ttf-mscorefonts-installer on Ubuntu 12.04/12.10. The short version is I need to pipe PDF files out of an application that requires these fonts for certain glyphs. The problem, after running this command, is that the fonts in my web browser (and some Java apps) are now "ugly." Obviously this is a subjective opinion, but it is the one I hold. I want the old fonts back for most cases (Liberation, DejaVu, Ubuntu, ...). I'm not sure how best to describe this, but here's an example.

    Example CSS in web browser:

        font-family: Verdana,Arial,sans-serif;

    Without ttf-mscorefonts-installer (Case 1):

        $ fc-match Verdana
        LiberationSans-Regular.ttf: "Liberation Sans" "Regular"
        $ fc-match Arial
        LiberationSans-Regular.ttf: "Liberation Sans" "Regular"
        $ fc-match sans-serif
        LiberationSans-Regular.ttf: "Liberation Sans" "Regular"

    With ttf-mscorefonts-installer (Case 2):

        $ fc-match Verdana
        Verdana.ttf: "Verdana" "Normal"
        $ fc-match Arial
        Arial.ttf: "Arial" "Normal"
        $ fc-match sans-serif
        LiberationSans-Regular.ttf: "Liberation Sans" "Regular"

    I want (Case 1). Optionally, I want the fonts in (Case 2) not to look "ugly", i.e. more jagged and less smooth than their free alternatives in my web browsers. Is this possible?

    Read the article

  • Career paths after web development?

    - by Mike
    I know this is open-ended, but I'm just curious what you've done after your web development career, or if you've stayed loyal. I have a feeling/read/heard that web development salaries top out at a certain amount, even after 10-15 years of experience. The reason I ask is that I graduated last summer with a BS in Chemical Engineering but have not been able to find a job in California. I've been web designing/developing since high school and thought that I should start a career, even if it's not related to my major, and not lose more time. Even though I'd really like to have an engineering career, I don't think that will happen. Do you guys have any suggestions or experiences for choices after, or ways to enhance, a career after several years in web development? Thanks!

    Update: Thanks for the responses, guys! One more question: is it likely to be accepted into an MS/PhD program if you've been out of uni for a couple of years? Or with semi-related job experience? Would I be a bit of a misfit with a BS in ChemE studying CS/CompE for an MS?

    Read the article

  • Conditional attribute in XML - most concise solution?

    - by Lech Rzedzicki
    I am tasked with setting up conditional profiling - a method of tagging chunks of XML with an attribute, which will then be used as a conditional value to extract a subset of that XML. Have a look at another definition/example: DITA profiling. The XML is documents that are equivalent to printed books - i.e. documents that are often looked at by a human, even if indirectly. Therefore I am looking at a few requirements here:

    1. Keeping the value list brief - so it doesn't affect the readability of the document.
    2. Be able to process with standard XML tools - a space-separated list inside an attribute is still probably fine, but I'd rather not use too much regexp for this.
    3. Be obvious to various users, including 3rd parties, which content goes where.
    4. Be easy to maintain going forward.

    Therefore one easy solution is:

    The problem with this:

    1. As the list grows, the value of the attribute can be a bit verbose.
    2. One needs to explicitly state every value, even if it's a scenario of "this vs. everything else".

    Therefore I am also looking at other approaches such as:

    1. Using + and - modifiers, Apache htaccess style, to override the default cascading of profiling - by default all content goes everywhere, and if we want to exclude a bit we just say "-kindle". It does require parsing the whole tree, is not supported by editing tools, and one needs to regexp the attribute value a bit deeper...
    2. Using an intermediate file to define groups of values such as "other" or "non-print"; there is an example of this in DITA. It allows concise XML as well as different grouping and values for each document, but it does create a certain level of abstraction which may make it a little less obvious for a 3rd party?

    Altogether, if you received such XML and were tasked to process it, which option would you rather receive? If you have any experiences like that, even in an unrelated area such as builds, don't hesitate to comment!

    Read the article

  • How to slow down a sprite that updates every frame?

    - by xiaohouzi79
    I am going through an Allegro 5 tutorial which has a game loop. There is also a variable "active" which determines if a key is being held down. Thus if the left key is being held down, active is on and it begins looping through the row on the sprite sheet that corresponds to moving left. The problem is that this logic is checked every time the loop runs, so at approximately 60 fps the three images used for the left walking animation cycle round super fast, which means my character looks like it is in a rush. Total beginner question: what is the correct way to slow down the transition between sprites so that the walking looks like it is done at a moderate pace? Here is the code used to transition across the sprite sheet between the three different phases of the person walking:

        if (active) {
            sourceX += al_get_bitmap_width(player) / 3;
        } else {
            sourceX = 32;
        }

        if (sourceX >= al_get_bitmap_width(player)) {
            sourceX = 0;
        }

    I can kind of guess what it should be in plain English: update sourceX only every certain fraction of a second, but I can't think of how to put this into code.
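
    One common way to put that into code is to advance the animation only every N iterations of the 60 fps loop. Below is a minimal sketch, not taken from the tutorial: the counter names frame_delay and frame_timer are invented for illustration, and only active, sourceX, player and al_get_bitmap_width() come from the snippet above.

        // ASSUMPTION: frame_delay / frame_timer are illustrative names, not from the tutorial.
        const int frame_delay = 10;   // loop iterations per animation frame (~6 fps at 60 fps)
        int frame_timer = 0;

        // inside the ~60 fps game loop:
        if (active) {
            frame_timer++;
            if (frame_timer >= frame_delay) {
                frame_timer = 0;
                sourceX += al_get_bitmap_width(player) / 3;    // next column of the sheet
                if (sourceX >= al_get_bitmap_width(player))
                    sourceX = 0;                               // wrap back to the first column
            }
        } else {
            sourceX = 32;                                      // standing frame
            frame_timer = 0;
        }

    An equivalent approach is to accumulate real elapsed time (for example via al_get_time()) and step the frame whenever the accumulator passes a fixed interval such as 0.15 s; the counter version is simply the easiest fit for a fixed-rate loop.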

    Read the article

  • Microsoft Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: Say we have two projects: ProjectA contains "C++" code that builds a .NET assembly DLL. ProjectB contains Visual C++ code that builds a traditional native Windows DLL. What is the best way to succinctly and terminologically draw a distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies. I'm just looking for names and labels. This is how, today, I might try to make the distinction when talking to someone: "ProjectA is a managed .NET C++ project" and "ProjectB is an unmanaged native C++ DLL project." However I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal language to use in this situation (or similar situations) might be. Feel free to motivate your answer.
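
    For what it's worth, one widely used set of labels is C++/CLI (or "managed C++") for ProjectA-style code and "native" or "standard" C++ for ProjectB-style code. The sketch below is purely illustrative - the class names are invented and it is not taken from either project - but it shows what each label refers to:

        #include <iostream>

        // ProjectA-style: C++/CLI ("managed C++"), compiled with /clr into a .NET assembly.
        public ref class ManagedGreeter
        {
        public:
            void Greet() { System::Console::WriteLine("hello from managed code"); }
        };

        // ProjectB-style: standard (native) C++, compiled to ordinary machine code in a Win32 DLL.
        class NativeGreeter
        {
        public:
            void Greet() { std::cout << "hello from native code\n"; }
        };

    In day-to-day speech, "ProjectA is a C++/CLI (managed) assembly" and "ProjectB is a native C++ DLL" is usually understood without further qualification.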

    Read the article

  • Strategy for restoring state via URL in web apps

    - by JW01
    This is a question about modern web apps, where a single page is loaded and all subsequent navigation is done by XHR calls and modifying the DOM. We can use libraries that manipulate the hash string, which let us navigate by URL and support the back/forward buttons. But to use those libraries, we need to be able to move the UI from any one state to any other. Is there a good strategy for moving between UI states that also allows them to be restored from scratch when you load a new URL? In a complex app, you might have a lot of different states. You don't want to reload the entire UI each time you change states. But you also don't want to require separate methods for moving from every state to every other state. Typically we need to:

    • Restore a state from scratch, when you enter a new URL or hit Reload.
    • Move from one state to another, when you use the Back/Forward buttons.
    • Move from one state to another, when you perform an action within your app (like clicking a link).
    • Move to certain states that shouldn't be added to the history, like ones that appear after form submissions.
    • Move to some states that are built on the previous state, like a drill-down list.

    When you perform actions within your app, there's the additional question of which comes first: do you change the URL, listen for the URL change, and change your state in response to it? Or do you change your state, then change the URL, but don't do anything in response? Does anyone have some experience to share on this topic?

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using Linq, and my methodology is to build model classes to represent each table, and each table that needs inserting or updating gets a Save() method (which either does an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, ex.

        public class CustomerCollection : CoreCollection<Customer> { }

    Recently, I was working on an application where end-users were experiencing slowness, where each of the objects needed to be saved to the database if they met a certain criteria. My Save() method was slow, presumably because I was making all kinds of round-trips to the server, and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this:

        foreach(Customer c in customerCollection)
        {
            if(c.ShouldSave())
            {
                c.Save();
            }
        }

    I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with - it might look something like this:

        CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St

    So, SQL Server parses the string, performs the logic to determine appropriateness of save, and then Inserts, Updates, or Ignores. With C#/Linq doing this work, it saved 5-10 records / s. When SQL does it, I get 100 records / s, so there is no denying the Stored Proc is more efficient; however, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?

    Read the article

  • How to recover data from a failing hard drive?

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.

    Read the article

  • Prevent Eclipse Java Builder from Compiling Java-Like Source

    - by redjamjar
    I'm in the process of writing an Eclipse plugin for my programming language Whiley (see http://whiley.org). The plugin is working reasonably well, although there's lots to do. Two pieces of the jigsaw are:

    1. I've created a "Whiley Builder" by subclassing the incremental project builder. This handles building and cleaning of "*.whiley" files.
    2. I've created a content-type called "Whiley Source Files" for "*.whiley" files, which extends "org.eclipse.jdt.core.javaSource" (this follows Andrew Eisenberg's suggestion).

    The advantage of having the content-type extend javaSource is that it immediately fits into the package explorer, etc. In principle, I could flesh out ICompilationUnit to provide more useful info, although I haven't done that yet. The disadvantage is that the Java builder is trying to compile my Whiley files ... and it obviously can't. Originally, I had the Java Builder run first, then the Whiley Builder. Superficially, this actually worked out quite well, since all of the errors from the Java Builder were discarded by the Whiley Builder (for Whiley files). However, I actually want the Whiley Builder to run first, as this is the best way for me to resolve dependencies between Java and Whiley files. Which leads me to my question: can I stop the Java builder from trying to compile certain Java-like resources? Specifically, in my case, those with the "*.whiley" extension. As an alternative, I was wondering whether my Whiley Builder could somehow update the resource delta to remove those files which it has dealt with. Thoughts?

    Read the article

  • Desktop Fun: Moody Skies Wallpapers

    - by Asian Angel
    The sky can personify a multitude of moods and emotions based on its appearance. Inspire your own thoughts and feelings with our Moody Skies Wallpaper collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more wallpapers be certain to see our great collections in the Desktop Fun section.

    Read the article

  • What are the licensing issues involved in the Oracle/Apache java dispute?

    - by Chris Knight
    I've just started following with interest the soap opera involving Oracle's acquisition of Java and the loss of goodwill it seems to have generated in the open source community. Specifically, I'm now trying to get my head around the implications of Oracle's decision to refuse Apache an open source license for Harmony. My questions:

    1) What is Harmony anyway? Their website states "Apache Harmony software is a modular Java runtime with class libraries and associated tools". How is this different from J2SE or J2EE? Or is Harmony akin to Android?

    2) The crux of this issue is the Java Technology Compatibility Kit (or TCK), which certifies that your implementation adheres to the JSR specifications. If I understand correctly, Oracle refuses to offer free or open source license access to the TCK, preventing projects like Harmony from being released as open source. Why is this such a big deal for Apache? E.g. why can't (or don't) they release Harmony under a restricted license?

    3) From this site is the following quote: "It looks like Oracle’s plan is to restrict deployments of Java implementations in certain markets, particularly on mobile platforms, so that it can monetize its own Java offering in those markets without any competition." Presumably anything Oracle produced would be subject to the same restrictions it is imposing on others with respect to end-technology licensing, so how could they get a leg up on the competition? While no doubt distasteful, wouldn't other competitors such as Google or Apache be able to release competing platforms under the same license as Oracle?

    Read the article

  • Different Flavors of Leases Back On

    - by Theresa Hickman
    Given the continued interest regarding the proposed changes to Lease Accounting, I decided to write another entry on this controversial topic with colorful commentary from our resident accounting expert, Seamus Moran.

    Background (A History Lesson)

    Back in 1976, the FASB issued FAS 13, “Accounting for Leases”, that permitted leases to be either an operating lease or capital (finance) lease. In substance, operating leases are a form of off-balance sheet financing. According to Seamus, operating leases date back to the launch of the Boeing 707 in the 1950s. Because the aircraft was so much more expensive than previous aircraft, the industry came up with the operating lease concept to accommodate these jet liners that dominated air transport. How it worked was the bank would buy the plane and lease it to the airline. Because the bank never controlled or flew the plane, they never placed the asset on their balance sheet, and because the airline never owned the plane, they didn’t place it on their balance sheet either. They simply treated the monthly lease payments as rental expenses on the P&L.

    August 2010 Original Lease Accounting Changes

    In August 2010, FASB and IASB decided to overhaul lease accounting as part of their joint commitment “to insure that investors and other users of financial statements are provided useful, transparent, and complete information about leasing transactions in the financial statements.” Some say that the current lease accounting standards are broken because they keep assets off the balance sheet, hidden from investors’ view. The original proposal abolished operating leases and only permitted capital leases, where all leases would be recorded on the balance sheet as assets and liabilities. The asset side would reflect the right to use the asset for the leased term, and the liability side would reflect the obligation to make lease payments.

    Why Companies Were Freaking Out

    According to the SEC, the financial impact of the aforementioned lease changes was estimated to add more than $1.3 trillion of operating lease obligations to corporate balance sheets. Many companies in various industries, especially retail, are concerned because the changes are significant and will impact existing leases, with no grandfather clause for existing operating leases. Of course, the banks and airlines I mentioned earlier really hate this because neither wants to report the airplane (now costing around $60 M) as an asset. Regular companies were concerned that they would have to report routine short term leases of real estate or equipment as fixed assets, even though they were really just longer term rentals. One company we spoke to leased roadside billboards, and really did not consider them to be fixed assets in any way. Obviously, these changes would have had a profound and lasting effect on a company’s financial and real estate strategies and significantly impact its financial statements. Financial statements would show higher depreciation and interest expense with significantly higher total assets and debt. In terms of financial metrics, they’re negatively impacted. It would raise a company’s debt-to-capital ratio to reflect the higher debt compared to equity, it would negatively impact their return-on-assets because now companies will appear more asset intensive, and it will decrease EPS, lowering shareholder ROI.

    Feb. 2011 Recent Update

    The comment period on leases closed in December 2010.
    The FASB and the IASB have met several times since then and published their initial responses to the input they received from the various interested parties. They are “redeliberating” the principles involved in Lease Accounting. Some of the issues they are looking at include:

    • The core definition of a lease. This will articulate principles on what is a lease and what is “not-a-lease.” One theory or supposition is that they might define a lease as the transfer of certain but not all major ownership attributes for a certain period of time. So a year’s lease of an aircraft might be a “lease,” but a year’s lease of half a floor in an office building would be “not-a-lease.” The ownership attributes transferred from the core owner to the user are different; the airline must maintain, paint, and do whatever it needs to do on the aircraft. However, the office renter will have strictly limited rights in respect to the rented space.
    • The differences between a lease contract and a service contract. Even if they call them “leases” for the purpose of commercial law, a service contract might not be accounted for as a lease.
    • The accounting to be done by the lessee. They would define when the bank or landlord would retain the asset on their balance sheet, and perhaps by implication, when the lessor would not need to include the asset on theirs. So if the finance house keeps the airplane or office on their balance sheet, the tenant doesn’t need to. I’m not sure that I can draw the opposite conclusion, where the finance house doesn’t report but the tenant must.
    • The difference, if any, between a financing lease and other leases, and the implications for the accounting.
    • The present value calculation when renewable terms exist. They have reduced the circumstances in which one must look at the renewable terms of a lease in calculating the present value. In most circumstances, you will use the lease term rather than the potential renewable term.

    They held their latest discussion this past week, but its contents were not available at the time of writing this entry. For more details, the results of the discussions are posted on both the FASB and the IASB websites.

    Implied Software Changes

    Whatever the final rules turn out to be, all ERP systems, such as Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards, and Oracle Hyperion, will need to change their software to accommodate the new rules. The following lists some changes that might have to be made to accounting software, depending on what the final standards will be in June 2011:

    • Lease tracking may require modifications, with tracking of additional lease details that might require a centralized repository to maintain.
    • Accounting may need to be modified, as there are many changes to how capital leases and the new “other than finance” leases are accounted for on both the lessee and lessor side. For example, valuation, amortization, and disclosure will be considerably different, requiring different types of data to be captured.
    • Companies may need to modify their chart of accounts depending on how they want to track leases, which could then impact financial reporting and consolidation.
    • Business processes may require changes, which could then impact internal controls.
    • Software applications may need to perform more advanced computations on leases.
    • Reports and KPIs may need to reflect new operating metrics.

    Hold Onto Your Seats

    Before you redo all your lease agreements and call your software vendors asking when the changes to the software will be made, remember that the rules are not finalized yet and, from appearances, will not reflect the proposals in the exposure draft. Not only are there objections to putting the operating lease assets on anyone’s balance sheet, there are lots of objections to the subjectivity and the data required for the valuation. According to Seamus, there is huge opposition from New York bankers, the airlines, the EU, the Communist Party of China (since it impacts their exporting business), and Republicans (hearing complaints from small and large businesses). Even if everyone can agree on the proposed changes, 2013 might be the earliest that companies would need to change how they report leases. The Boards will finish their deliberations in April, May or June 2011. As we’ve seen with other Exposure Drafts, if the changes are minor and the principles meet the General Acceptance consensus criteria, the Standard could be finalized at that time. However, if substantial changes are made, a fresh exposure draft, comment period, and review period might be involved, too.

    Seamus added an interesting perspective. Even if the proposed changes do pass, don’t you think our customers, such as Boeing, GE Capital, United Airlines, etc., will be clever enough to come up with a new kind of financing arrangement that complies with the new accounting? How about the large retail customers, such as Best Buy and Macerich? Don’t you think they might simply cut deals around retail locations with new contracts that prevent their leases from being capital leases? Instead of blindly adapting the software to meet the principles outlined in the final standard, our software needs to accommodate how businesses will respond to the new rules. We cannot know our customers' responses until the rules are finalized. Oracle is aware of the potential changes and is staying abreast of the developments through our domain expertise staff, our relationships with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to worldwide regulatory compliance. Oracle products have been IFRS and GAAP compliant for years, and we will continue to maintain those standards going forward.

    Read the article

  • RMS java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects - in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but over time it has proven not to be flexible enough. I also personally dislike the Java-to-JS compilation process and the whole GWT codebase. Not only is its design ugly, it also makes certain low-level JS things very complicated, if not completely impossible. So what I'm looking for is:

    • As close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary. That happens more often than we thought.
    • Datagrid capable - Ext.js & PrimeFaces look pretty good, and Vaadin does too.
    • DB-schema generators (optional, in whichever direction).

    If it were only up to me, I'd probably stick to Ext.js + a custom REST-based Java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it's kind of hard for me to judge or even propose other frameworks. What would be your proposition? What are the problems you've faced, what was your solution, and how hard do you think it was to deal with them in the "big picture"?

    Read the article

  • Bash History not containing all history and blank after reboot, how to resolve?

    - by TryTryAgain
    I've recently upgraded from 13.04 to 13.10 and realized my terminal bash history is not surviving reboots. cat ~/.bash_history gave me a permission denied error. I, possibly unnecessarily or wrongly, issued a chmod 777 ~/.bash_history to see if that would help... and although I could then cat and read some contents, it contained not much of anything as far as history. I also tried sudo rm ~/.bash_history after reading "bash history not being preserved". Strangely, after doing that, I typed a few test commands, ls, ls -lah ... and upon pressing the up arrow to go back through history it contained those two commands as well as odd history from some far-off time in the past, but very few results and not the hundreds of commands I typed earlier in the day. Is there a new place bash history is stored? How can removing ~/.bash_history not get rid of the commands that are somehow lingering? I am not certain, but I believe my root bash history is acting normal. My user bash history is what's causing me trouble. Any help and guidance in tracking down and solving this problem is appreciated.

    Read the article

  • Burned CD-R are not identical to the input iso image, why?

    - by Grumbel
    I have the issue that sometimes when I burn an iso image to a CD-R with:

        sudo wodim -v driveropts=burnfree -data dev=/dev/scd0 input.iso

    and then read it back out again with:

        sudo dd if=/dev/cdrom of=output.iso
        dd: reading `/dev/cdrom': Input/output error
        ...

    I end up with two iso images that are not identical; namely, output.iso is missing 2048 bytes at the end. However, when I mount the iso image or CD-R and compare the actual files at the mountpoint, both are identical. Is that expected behavior, or is that an actually incorrect burn of the data? And if it's expected, how can I verify that the burn process was successful? The reason I ask in the first place is that it seems to be reproducible behavior: certain iso images come out 2048 bytes short, even on repeated burns, but all the burned CD-Rs are identical among themselves. Also, what is the reason behind the:

        dd: reading `/dev/cdrom': Input/output error

    As it always happens, I assume it is normal, but what is the technical reason behind it? I assume CDs don't allow the device to detect the size directly, so dd reads until it encounters the end the hard way.

    Edit: User karol on superuser.com mentioned that both the size issue and the read error are the result of using -tao (the default) in wodim instead of -dao mode. I couldn't test it yet, but it sounds like the most plausible explanation so far.

    Read the article

  • XOLO X900–First mobile phone with Intel Power

    - by Rekha
    The XOLO X900 is XOLO’s offering of the world’s first smartphone with the power of Intel Inside®, the result of a partnership with LAVA International Ltd., one of India’s fastest growing handset brands. The R&D centres are in Shenzhen (China) and Bangalore (India). The smartphone has fast web browsing with its 1.6 GHz Intel processor and smooth multi-tasking using Intel’s patented Hyper-Threading technology. It has optimized battery usage, a 4.03” hi-resolution LCD screen of 1024x600 pixels to ensure crisp text and vibrant images, an HDMI output port for TV, full HD 1080p playback and dual speakers. It has an 8MP HD camera with certain DSLR-like features, allowing it to take up to 10 photos in less than a second. 3D and HD gaming is immensely realistic with the 400 MHz graphics processing unit. The operating system is Android 2.3 (Gingerbread), upgradable to Android 4.0. It has GPS, and rear and front cameras of 8MP and 1.3MP respectively. Accelerometer, gyroscope, magnetometer, ambient light sensor and proximity sensor are all enabled in this smartphone. Intel’s smartphone venture is beginning in India first. It is said to be available for sale in India from April 23, 2011 onwards, at a best-buy price of approximately INR 22,000. The smartphone will be available at the Indian retail chain Croma, and will be available in other retail stores and online stores from early May. The company is launching the smartphone in India first and a more powerful handset in China later this year. Depending on their success in India and China, Intel is planning to enter the European and US markets. Till then, Intel smartphones are only for Indian buyers. You can find more technical information on XOLO’s site.

    Read the article

  • Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    Hello. I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: Say we have two projects: ProjectA contains "C++" code that builds a .NET assembly DLL. ProjectB contains Visual C++ code that builds a traditional native Windows DLL. What is the best way to succinctly and terminologically draw a distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies. I'm just looking for names and labels. This is how I might try to make the distinction when talking to someone about Project A and Project B: "ProjectA is a managed .NET C++ project" and "ProjectB is an unmanaged Visual C++ DLL project." However, I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal language to use in this situation (or similar situations) might be. Feel free to motivate your answer.

    Read the article

  • How advanced are author-recognition methods?

    - by Nick Rtz
    From a written text by an author if a computer program analyses the text, how much can a computer program tell today about the author of some (long enough to be statistically significant) texts? Can the computer program even tell with "certainty" whether a man or a woman wrote this text based solely on the contents of the text and not an investigation such as ip numbers etc? I'm interested to know if there are algorithms in use for instance to automatically know whether an author was male or female or similar characteristics of an author that a computer program can decide based on analyses of the written text by an author. It could be useful to know before you read a message what a computer analyses says about the author, do you agree? If I for instance get a longer message from my wife that she has had an accident in Nigeria and the computer program says that with 99 % probability the message was written by a male author in his sixties of non-caucasian origin or likewise, or by somebody who is not my wife, then the computer program could help me investigate why a certain message differs in characteristics. There can also be other uses for instance just detecting outliers in a geographically or demographically bounded larger data set. Scam detection is the obvious use I'm thinking of but there could also be other uses. Are there already such programs that analyse a written text to tell something about the author based on word choice, use of pronouns, unusual language usage, or likewise?

    Read the article

  • Octree implementation for frustum culling

    - by Manvis
    I'm learning modern (>= 3.1) OpenGL by coding a 3D turn-based strategy game in C++. The maps are composed of 100x90 3D hexagon tiles that range from 50 to 600 tris (20 different types), plus any player units on those tiles. My current rendering technique involves sorting meshes by the shaders they use (minimizing state changes) and then calling glDrawElementsInstanced() for drawing. I still get a solid 16.6 ms/frame on my GTX 560 Ti machine, but the game struggles (45.45 ms/frame) on an old 8600GT card. I'm certain that using an octree and frustum culling will help me here, but I have a few questions before I start implementing it:

    • Is it OK for an octree node to have multiple meshes in it (e.g. can a soldier and the hex tile he's standing on end up in the same octree node)? See the sketch below.
    • How is one supposed to treat changes in object position (e.g. several units moving 3 hexes down)? I can't seem to find a good explanation of how to do it.
    • As I've noticed, sorting meshes by shaders is a really good way to save GPU. If I put node contents into, let's say, an std::list and sort it before rendering, do you think I would gain any performance, or would it just create overhead on the CPU's end?

    I know that this sounds like early optimization and implementing + testing would be the best way to find out, but perhaps someone knows from experience?
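
    On the first question: nothing prevents a node from holding several objects. A common layout simply keeps, per node, a list of references to whatever currently overlaps its bounds, and a moving unit is removed and reinserted when it leaves its cell. A rough sketch of such a node follows; every type name is invented for illustration and this is not taken from the post or from any particular engine.

        #include <array>
        #include <memory>
        #include <vector>

        // Illustrative types only: an axis-aligned box and a renderable object
        // such as a hex tile or a soldier, owned elsewhere.
        struct AABB { float min[3]; float max[3]; };
        struct MeshInstance;

        struct OctreeNode {
            AABB bounds;
            std::vector<MeshInstance*> objects;                   // several objects may share one node
            std::array<std::unique_ptr<OctreeNode>, 8> children;  // all null for a leaf
        };

        // When a unit moves a few hexes: remove it from the node holding it, then
        // reinsert it from the lowest ancestor that still encloses its new bounds
        // (or simply from the root; with ~9,000 mostly static tiles that is cheap).

    On the third question, sorting a few thousand visible entries by a shader key each frame is usually cheap next to the draw calls themselves, but measuring on the 8600GT is the only way to be sure.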

    Read the article
