Search Results

Search found 5666 results on 227 pages for 'cost analysis'.

Page 88 of 227

  • Alkan Improves Aeronautical-Equipment Product Collaboration, Design Processes, and Government Compliance

    - by Gerald Fauteux
Alkan S.A., a leading aeronautical equipment manufacturer in France specializing in carriage-release and ejection systems for various types of military aircraft, utilizes Oracle's AutoVue Electro-Mechanical Professional for Agile as part of its Agile Product Lifecycle Management solution. AutoVue Electro-Mechanical Professional for Agile enables multiformat 3-D viewing of engineering designs, leading to deeper analysis of component and product functionality, and allows all teams to easily participate in and contribute to product data early in the development cycle. Alkan S.A.'s equipment is used in more than 65 countries and is certified for more than 60 types of aircraft worldwide. Click here to read the complete story (also available in French).

    Read the article

  • New Thinking for Supply Chain Analytics. PLM for Process. And Untangling Services Complexity.

    - by David Hope-Ross
The first edition of the quarterly Oracle Information InDepth Value Chain and Procurement Transformation newsletter has just been published. It's a solid round-up of news and analysis from the fast-moving world of global supply chains and supply management. As the title of this post implies, the latest edition covers a wide array of great topics, but the story on supply chain analytics from Endeca is especially interesting. Without giving away the ending, it explores new ways of thinking about the value of information and how to exploit it for supply chain improvement. If you enjoy this edition, think about opting in via the subscription link. It is an easy way to keep up with the latest and greatest.

    Read the article

  • How an LED-lit LCD Monitor Works [Video]

    - by Jason Fitzpatrick
There's a good chance you're staring at one right now: the common LCD monitor. How exactly does it work? Find out by watching this informative video. Bill Hammack, the engineer behind the Engineer Guy video series, takes apart an LCD monitor and gives a detailed analysis of what's going on inside as he rebuilds it, including how the pixels function, what the screen is constructed of, and how the light is diffused. LCD Monitor Teardown [YouTube via Hack A Day]

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent "Common Approach to Federal Enterprise Architecture" (US Executive Office of the President, May 2, 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries of, or otherwise impacted by, the investment. The FEA Common Approach extends from and builds on the rapidly maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes - for example the OMB's Exhibit 300 (i.e. the Business Case justification for IT investments).

    A very interesting element of this Approach is the very necessary guidance for actually using an Enterprise Architecture (EA) and/or its collateral - good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn't always clear, however, particularly during the first interactions of a Program's technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost savings).

    The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300s or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. "Actionable" is the key word - too many Public Service entity EAs (including the FEA) have for too long been used simply as a very highly abstracted standards reference, duly maintained and nominally enforced by an Enterprise or System Architect's office. Refreshing elements of this recent FEA Common Approach include one of the first Federally documented acknowledgements of the "Solution Architect" (the "problem-solving" role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual "EA Roadmap" documents - roadmaps grounded in real cost, technical and functional details that are fully aligned with both contextual expectations (for example the new "Digital Government Strategy" and its required roadmap deliverables) and the rapidly increasing complexities of today's more portable and transparent IT solutions.

    We also expect some very critical synergies to develop in early IT investment cycles between this new breed of "Federal Enterprise Solution Architect" and the first waves of the newly formal "Federal IT Program Manager" roles operating under more standardized "critical competency" expectations (including EA), which are likely already to be seriously influencing the quality of annual CPIC (Capital Planning and Investment Control) processes. Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain - Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • How come verification does not include actual testing?

    - by user970696
Having read a lot about this topic, I still did not get it. Verification should prove that you are building the product right, while with validation you prove you are building the right product. But only static techniques are mentioned as verification methods (code reviews, requirements checks...). How can you say whether something is implemented correctly if you do not test it? It is said that verification checks, e.g., code for its correctness. Verification - ensure that the product meets specified requirements. Again, if a function is specified to work a certain way, only by testing can I say that it does. Could anyone explain this to me please? EDIT: As Wiki says: Verification: preparing the test cases (based on analysis of the requirements). Validation: running the test cases.

    Read the article

  • Issues with the intended behavior of a Service layer?

    - by Rafael Cichocki
This analysis makes sense, and states that anything that avoids code duplication and simplifies maintenance speaks for a service layer. But what is the technical behavior? When a service client references a service, does it do so at runtime, or does it happen at compile time? When I change something in the service layer code, will this change be automatically taken into account in all its clients, or do they need to be individually recompiled? How does this make sense from a testing point of view - I have working code, based on some code from a service, but if that service changes, my code might break?!
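
    A minimal sketch of the runtime-binding case, assuming the service is exposed over HTTP and consumed through Java's built-in HttpClient (the endpoint URL and payload here are hypothetical): the client only depends on the contract, so service-internal changes need no client recompilation, while contract changes surface as runtime failures, which is why contract tests matter.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class OrderServiceClient {
            // Hypothetical endpoint: the client depends only on this contract (URL + JSON),
            // not on the service implementation, so service-side changes need no recompile here.
            private static final URI ORDERS_ENDPOINT = URI.create("https://example.internal/api/orders/42");

            public static void main(String[] args) throws Exception {
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder(ORDERS_ENDPOINT)
                        .header("Accept", "application/json")
                        .GET()
                        .build();

                // Binding happens at runtime: nothing about the service is linked in at compile time.
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

                // If the service changes its response shape, this is where the client breaks, at runtime;
                // consumer-driven contract tests are the usual safety net for that case.
                System.out.println("status=" + response.statusCode() + " body=" + response.body());
            }
        }

    A compile-time dependency (the service packaged as a shared library) behaves the opposite way: every client must be rebuilt against the new version to pick up a change.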

    Read the article

  • AJI Software is now a Microsoft Gold Application Lifecycle Management (ALM) Partner

    - by Jeff Julian
Our team at AJI Software has been hard at work over the past year on certifications and projects that have allowed us to reach Gold Partner status in the Microsoft Partner Program.  We have focused on providing services that not only assist in custom software development, but also in process analysis and mentoring.  I definitely want to thank each one of our team members for all their work.  We are currently the only Microsoft Gold ALM Partner within a 500-mile radius of Kansas City. If you or your team is in need of assistance with Team Foundation Server, Agile processes, Scrum mentoring, or just a process/team assessment, please feel free to give us a call.  We also have practices focused on SharePoint, mobile development (iOS, Android, Windows Mobile), and custom software development with .NET.  Technorati Tags: Gold Partner,ALM,Scrum,TFS,AJI Software

    Read the article

  • Formalizing programmers' errors

    - by Maksee
Every one of us makes errors that lead to bugs. I once wanted to start logging my errors for future analysis, probably recording the project title, the approximate time spent and, most importantly, the type of error. For example, when I copy-pasted a fragment about 'x', replaced every occurrence of 'x' with 'y' and forgot to replace one tiny piece, that goes under 'copy-paste error'. The usefulness of this approach depends on whether I can formalize my errors at all, and probably on minimizing the number of types to choose from; otherwise I would start postponing, ignoring and so on, making the system useless. Is there existing research in this area, perhaps a known minimal set of error types? Maybe some of you have already tried to implement something like this and succeeded/failed?
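
    As a rough illustration of what such a journal could look like, a minimal sketch in Java; the category names below are just one possible starting taxonomy, not an established standard, and the field names are my own.

        import java.time.Duration;
        import java.time.LocalDate;
        import java.util.ArrayList;
        import java.util.List;

        public class ErrorJournal {
            // Deliberately small starting taxonomy; add a category only when a new kind of mistake recurs.
            enum ErrorType { COPY_PASTE, OFF_BY_ONE, WRONG_ASSUMPTION, MISSED_EDGE_CASE, TYPO, OTHER }

            // One journal entry: date, project, rough time lost, category, and a one-line note.
            record Entry(LocalDate date, String project, Duration timeLost, ErrorType type, String note) {}

            private final List<Entry> entries = new ArrayList<>();

            void log(String project, Duration timeLost, ErrorType type, String note) {
                entries.add(new Entry(LocalDate.now(), project, timeLost, type, note));
            }

            public static void main(String[] args) {
                ErrorJournal journal = new ErrorJournal();
                journal.log("billing-service", Duration.ofMinutes(45), ErrorType.COPY_PASTE,
                        "replaced x with y but missed one occurrence");
                journal.entries.forEach(System.out::println);
            }
        }

    Keeping the enum short is the whole point: a taxonomy with dozens of categories is exactly what makes this kind of logging feel like a chore and get abandoned.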

    Read the article

  • Tab Sweep - State of Java EE, Dynamic JPA, Java EE performance, Garbage Collection, ...

    - by alexismp
Recent Tips and News on Java EE 6 & GlassFish: • Java EE: The state of the environment (SDTimes) • Extend your Persistence Unit on the fly (EclipseLink blog) • Glassfish 3.1 - AccessLog Format (Ralph) • Java Enterprise Performance - Unburdened Applications (Lucas) • Java Garbage Collection and Heap Analysis (John) • What do you expect from JMS 2.0? (Julien) • Dynamically registering WebFilter with Java EE 6 (Markus)

    Read the article

  • Non-use of persisted data

    - by Dave Ballantyne
    Working at a client site - that in itself is good to say - I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to:

        select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid)
        from sales.salesorderheader
        where BillToAddressID = 985

    A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table-valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as

        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid)

    and the function code is

        Create Function udfCleanGuid(@GUID uniqueidentifier)
        returns varchar(255)
        with schemabinding
        as
        begin
            Declare @RetStr varchar(255)
            Select @RetStr = CAST(@Guid as varchar(255))
            Select @RetStr = REPLACE(@RetStr, '-', '')
            return @RetStr
        end

    Executing the Select statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column:

        Alter table sales.salesorderheader
        add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement...

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    ...and our new optimized plan looks not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; the value was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event. Why did the recalculation happen instead of the lookup? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, runs that through its logic, and decides that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since in practice they are invariably more expensive. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option; there may be other code reliant on it. We are left with only one real option: add the persisted column into the index.

        drop index Sales.SalesOrderHeader.idxBill
        go
        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid, cleanedguid)

    Now if we repeat the statement

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    we still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • Inspection, code review - is it really testing?

    - by user970696
ISTQB, Wikipedia and other sources classify verification activities (reviews etc.) as static testing, yet others do not. If we can say that peer reviews and inspections are actually a kind of testing, then a lot of standards do not make sense (consider e.g. ISO, which says that validation is done by testing, while verification is done by checking of work products) - it should at least say dynamic testing for validation, shouldn't it? I am completing a master's thesis dealing with QA and I must admit that I have never seen worse, more ambiguous and more contradictory literature than in this field :/ Do you think (and if so, why) that static testing is a good and justifiable term, or should we stick to testing and static checks/analysis?

    Read the article

  • Persisting natural language processing parsed data

    - by tjb1982
I've recently started experimenting with natural language processing (NLP) using Stanford's CoreNLP, and I'm wondering what some of the standard ways are to store NLP-parsed data for something like a text mining application? One way I thought might be interesting is to store the children as an adjacency list and make good use of recursive queries (Postgres supports this and I've found it works really well). But I assume there are probably many standard ways to do this, depending on what kind of analysis is being done, that have been adopted by people working in the field over the years. So what are the standard persistence strategies for NLP-parsed data and how are they used?
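
    For the adjacency-list idea specifically, a minimal sketch; the node and row shapes below are hypothetical stand-ins (a real tree would come from the CoreNLP parser rather than being built by hand). Each node gets an id and a parent id, which is exactly the shape a recursive SQL query (e.g. a Postgres WITH RECURSIVE CTE) can walk.

        import java.util.ArrayList;
        import java.util.List;

        public class ParseTreeFlattener {
            // Simplified stand-in for a parser's tree node: a label plus children.
            record Node(String label, List<Node> children) {}

            // One row of the adjacency list: (id, parentId, label); the root's parentId is null.
            record Row(int id, Integer parentId, String label) {}

            private final List<Row> rows = new ArrayList<>();
            private int nextId = 0;

            // Depth-first walk assigning ids; the rows map directly onto an adjacency-list table.
            void flatten(Node node, Integer parentId) {
                int id = nextId++;
                rows.add(new Row(id, parentId, node.label()));
                for (Node child : node.children()) {
                    flatten(child, id);
                }
            }

            public static void main(String[] args) {
                Node np = new Node("NP", List.of(new Node("DT", List.of()), new Node("NN", List.of())));
                Node root = new Node("S", List.of(np, new Node("VP", List.of())));

                ParseTreeFlattener flattener = new ParseTreeFlattener();
                flattener.flatten(root, null);
                flattener.rows.forEach(System.out::println);
            }
        }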

    Read the article

  • Where can I find accessible bug/issue databases with complete revision history?

    - by namenlos
    I'm performing some research and analysis on bug/issue tracking databases and, more specifically, on how programmers and teams of programmers actually interact with them. What I'm looking for involves understanding how those databases change over time. So what I don't need, for example, is a database of all the bugs of some open source project as the bugs exist today. What I do need is a complete set of revision history for every issue/bug in the database. This would enable me to pick a specific datetime and say: here is the list of all the issues/bugs that existed at that moment in time. Anyone know of some publicly accessible issue/bug databases that expose this revision data? Ideally, the revision data would look something like this (shown for a single bug, with two revisions):

        ISSUEID  PRI  SEV  ASSIGNEDTO  MODIFIEDON       VALIDUNTIL
        1        2    2    mel         apr-1-2010:5pm   apr-1-2010:6pm
        1        2    3    steve       apr-1-2010:6pm   NULL
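
    As a sketch of how that revision shape supports point-in-time queries (field names follow the example above; the SQL in the comment is just the equivalent predicate, not tied to any particular tracker): an issue's state at time T is the revision with MODIFIEDON <= T whose VALIDUNTIL is either null or after T.

        import java.time.Instant;
        import java.util.List;
        import java.util.stream.Collectors;

        public class PointInTime {
            // One revision row, mirroring the example table above; validUntil == null means "current".
            record Revision(int issueId, int pri, int sev, String assignedTo,
                            Instant modifiedOn, Instant validUntil) {}

            // State of the tracker as of 't': the revision of each issue in force at that instant.
            // Equivalent SQL predicate: modifiedOn <= t AND (validUntil IS NULL OR validUntil > t)
            static List<Revision> asOf(List<Revision> revisions, Instant t) {
                return revisions.stream()
                        .filter(r -> !r.modifiedOn().isAfter(t))
                        .filter(r -> r.validUntil() == null || r.validUntil().isAfter(t))
                        .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                Instant fivePm = Instant.parse("2010-04-01T17:00:00Z");
                Instant sixPm = Instant.parse("2010-04-01T18:00:00Z");
                List<Revision> history = List.of(
                        new Revision(1, 2, 2, "mel", fivePm, sixPm),
                        new Revision(1, 2, 3, "steve", sixPm, null));

                // At 5:30pm only the first revision was in force.
                System.out.println(asOf(history, Instant.parse("2010-04-01T17:30:00Z")));
            }
        }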

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder whether the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX/OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die-hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started in DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race between GFX giants ATI and NVIDIA to beat each other on frame rates and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power and memory. Every eagerly awaited title started demanding more muscle in graphics and PC hardware. The latest games and all the liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring overclocking, optimized hardware cooling, etc. to pursue the passion.

    The ever-rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones on a tight budget preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of P2P file-sharing networks. Then came the era of the Xbox and PS3. These consoles solved the major issue of hardware standardization and provided an alternative to ever-increasing hardware costs. I have always admired them, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first-person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is: PC gamers deserve an equally democratized solution.

    This is where I think cloud computing can come to the rescue. It can minimize hardware requirements, virtually end software piracy, and rationalize costs for gamers. Subscription-based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on nightmare in Quake III, time and again!). Easy deployment of patches and fixes. Better game AI. The list goes on and on... Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back to the faces of budget gamers with the help of cloud computing.

    Read the article

  • Do you use third-party companies to review your company's code?

    - by CodeToGlory
I am looking to get the following: basic code review to make sure they follow the guidelines imposed; security code analysis to make sure there are no loopholes; and confirmation that there are no performance bottlenecks, by doing a load test etc. We have a lot of code coming in from third parties and it is becoming laborious to manage code reviews, hence I am looking to see if others employ such practices. I understand that it may be a concern for some and would raise the question "Well, who is going to make sure the agency is doing their job right?" But basically I am just looking for a third party who can hold all vendor code to the same standards.

    Read the article

  • How many lines of code can a C# developer produce per month?

    - by lox
An executive at my workplace asked me and my group of developers the question: how many lines of code can a C# developer produce per month? An old system was to be ported to C# and he would like this measure as part of the project planning. From some (apparently credible) source he had the answer of "10 SLOC/month", but he was not happy with that. The group agreed that this was nearly impossible to specify, because it would depend on a long list of circumstances. But we could tell that the man would not leave (or would be very disappointed in us) if we did not come up with an answer that suited him better. So he left with the many-times-better answer of "10 SLOC/day". Can this question be answered? (Offhand, or even with some analysis?)

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a Service-Oriented Architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure.

    However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based, and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration. The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation for data privacy and security. Putting databases in the cloud is a sticky issue for many and impossible for some, due to compliance and security issues, the need for direct control over data, and so on.

    In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. As well as SQL Azure Database (SAD), Azure storage, and the unstructured 'BLOB and Entity-Attribute-Value' NoSQL storage alternative (which equates more closely with folders and files than a database), Windows Azure offers a wide range of storage options, including the use of services such as OData: developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud along with personal and financial data. For example, we could port to Azure only those parts of our ticketing application that capture and process ticket orders; once an order is captured, the financial side can be processed in our own data center.

    In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you. Cheers, Tony.

    Read the article

  • Why is it always "what language should I learn next" instead of "what project should I tackle next"?

    - by MikeRand
    Hi all, Why do beginning programmers (like me) always ask about the next language they should learn instead of asking about the next project to tackle? Why did Eric Raymond, in the "Learn How To Program" section of his "How To Become A Hacker" essay, talk about the order in which you should learn languages (vs. the order in which you should tackle projects). Do beginning carpenters ask "I know how to use a hammer ... should I learn how to use a saw or a level next?" I ask because I'm finding that almost any meaningful project I'm interested in tackling (e.g. a web app, a set of poker analysis tools) requires that I learn just enough of a multitude of languages (Python, C, HTML, CSS, Javascript, SQL) and frameworks/libraries (wxPython, tkinter, Django) to implement them. Thanks, Mike

    Read the article

  • List of freely available SEO tools (software) for keyword rank checking? [closed]

    - by Craig
Possible Duplicate: can anyone recommend a Google SERP tracker? Requirements: analysis of the site's positions for a list of keywords in different search engines; tracking keyword positions in search engines (I want to see if my keyword rankings have moved up or down); creating reports. I use Excel + the Rank Checker addon for Firefox to analyze the position of the site in search engines for my keyword list. Are there any tools that are tested and working properly? Thanks.

    Read the article

  • OSB, Service Callouts and OQL - Part 1

    - by Sabha
Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This post delves into OSB internals, the problem associated with usage of Service Callout under high loads, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series mainly covers the threading model used internally by OSB for implementing Route vs. Service Callout actions. Please refer to the blog post for more details.

    Read the article

  • Is it a good idea to appoint one of the scrum team members or the scrum master as Product Owner?

    - by Sandy
Lately we had a project in which the client was busy touring. As usual, a scrum team was formed; management decided to appoint our analyst as Product Owner, since the client wouldn't be able to participate actively. The analyst was the one who had worked closely with the client on requirement analysis and specification drafting. The client didn't have the time to review the first two releases. Everything went smoothly until the client saw the third release; he wasn't satisfied with some functionalities, and those were introduced by the makeshift Product Owner (our analyst). We were told to wait until the design team finished mock-ups of all pages and the client had checked and approved each one before continuing work. The scrum team is there, but no sprints - we finished the work almost like the classic waterfall method. Is it a good idea to appoint a scrum team member or the scrum master as product owner? Do we need to follow scrum in the absence of client/product owner participation?

    Read the article

  • migrating from struts2, looking for a new framework

    - by adhg
We are supposed to start a relatively big project that will require lots of computation and analysis. Presentation (UI) for the end user is very crucial (graphs, tables...). So far we've been using struts2. It's ok+. It has some drawbacks (especially if you work with tiles and all that XML), but if you get the lingo, you're ok. One option on the table is to continue using struts2 with jQuery and all the other stuff that we've been doing for so long. Alternatively, I think we have an opportunity to learn something new and maybe a bit better than struts2. My question is this: has anyone migrated from struts2 to something new and can share the experience? Or had some great experience with a particular Java framework? Many thanks for any pointers.

    Read the article

  • Azure Futures - Distributed Computing and Number Crunching

    - by JoshReuben
    "the biggest Azure customers today are the ones using HPC on-premises at the current time" - http://www.zdnet.com/blog/microsoft/windows-azure-futures-turning-the-cloud-into-a-supercomputer/8592?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+zdnet%2Fmicrosoft+%28ZDNet+All+About+Microsoft%29&utm_content=Google+Reader   Orleans Framework for cloud computing - http://research.microsoft.com/en-us/projects/orleans     HPC on Azure - http://www.zdnet.com/blog/microsoft/microsoft-finalizes-its-latest-supercomputing-operating-system-release/7414   Dryad is Microsoft’s competitor to Google MapReduce and Apache Hadoop  - http://www.zdnet.com/blog/microsoft/microsoft-takes-a-step-toward-commercializing-its-dryad-distributed-computing-technologies/8255?tag=mantle_skin;content   SQL Server Analysis Services DataMining in the cloud - http://www.sqlmag.com/article/reporting2/azure-data-mining-in-the-cloud.aspx

    Read the article

  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of distributed computing, but I've had a few friends that have been surprised by a bill lately and I wanted to quickly address at least one aspect of it. Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of "cloud" services as a taxi - we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel.

    When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of it as checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just as it states in the Azure Portal. So in effect there is a cost for the service and then a cost to use it, like water or power or any other utility.

    Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account. I understand the frustration. Although we have all this spelled out in the sign-up area, not everyone has the time to read through all that. I get that. So why not make this easier?

    As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As far as being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and that you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do.

    By the way, you don't have to cancel your whole account to avoid being billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.

    Read the article

  • Management Software in Java for Networked Bus Systems

    - by Geertjan
    Telemotive AG develops complex networked bus systems such as Ethernet, MOST, CAN, FlexRay, LIN and Bluetooth as well as in-house product developments in infotainment, entertainment, and telematics related to driver assistance, connectivity, diagnosis, and e-mobility. Devices such as those developed by Telemotive typically come with management software, so that the device can be configured. (Just like an internet router comes with management software too.) The blue AdmiraL is a development and analysis device for the APIX (Automotive Pixel Link) technology. Here is its management tool: The blue PiraT is an optimised multi-data logger, developed by Telemotive specifically for the automotive industry. With the blue PiraT the communication of bus systems and control units are monitored and relevant data can be recorded very precisely. And here is how the tool is managed: Both applications are created in Java and, as clearly indicated in many ways in the screenshots above, are based on the NetBeans Platform. More details can be found on the Telemotive site.

    Read the article
