Search Results

Search found 19536 results on 782 pages for 'events management'.


  • What's the best way to sell ReSharper to management? [closed]

    - by Jackson Pope
    Possible Duplicate: How do you convince your boss to buy useful tools like Resharper, LinqPad? I've recently started a new job developing code in C# and ASP.Net. At a previous employer I used ReSharper from JetBrains and loved it. I've downloaded the free trial in my new job, as have several of my new colleagues on my recommendation. Everyone thinks it's great. But now our trials are coming to an end and it's time to buy or say goodbye. I've been reliably informed that getting money for tools from senior management is like trying to get blood from a stone, so how can I convince them to loosen their grip on the purse strings and buy it for our team (of seven developers)? Does anyone have any experience of convincing management of the benefits of refactoring tools? I feel the benefit every second I use it, but I'm having difficulty thinking of how to explain the concrete benefits to a manager who only thinks…

    Read the article

  • How to convince management of making our project open source?

    - by MrSoundless
    Xamarin 3 was released last week with a great new addition: Xamarin.Forms. This caught our attention because we've been using such a system for a couple of years now. We developed it ourselves and have used it for a bunch of projects. We've been looking for a way to make this project open source, but we haven't managed to convince management. They believe we should not make it open source because we won't gain anything from it, and all that will happen is that competitors will be able to build apps more quickly with our library. We believe open sourcing our library will make the world a better place and that it will make our library much more stable and complete. So my question to all you people out there: how can we convince management to open source our library?

    Read the article

  • What is the best agile project management technique for developing innovative software systems?

    - by user654019
    I am involved with the development of innovative software. The development is innovative in the sense that we don't yet know how to build the system or which algorithms to use, and nobody has done it before. The process consists of several stages: studying books and papers, suggesting algorithms, writing prototypes, and comparing the results with actual data. We hope that after several iterations we will converge on a valid software system. What is the best project management approach that we can use? Is there any project management software for these types of projects?

    Read the article

  • Announcing General Availability of the E-Business Suite Plug-in

    - by Kenneth E.
    Oracle E-Business Suite Application Technology Group (ATG) is pleased to announce the General Availability of Oracle E-Business Suite Plug-in 12.1.0.1.0, an integral part of Application Management Suite for Oracle E-Business Suite. Enterprise Manager 12c Cloud Control and the Application Management Suite together combine the functionality that was available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle’s Real User Experience Insight product and the Configuration & Compliance capabilities, providing the most complete solution for managing Oracle E-Business Suite applications. The features that were available in the standalone management packs are now packaged into the Oracle E-Business Suite Plug-in, which is fully certified with Oracle Enterprise Manager 12c Cloud Control. This latest plug-in extends Cloud Control with E-Business Suite specific system management capabilities and features enhanced change management support.

    Here is all the information you need to get started (EBS Plug-in 12.1.0.1.0 info):

    Full Announcement
    • E-Business Suite Plug-in 12.1.0.1 for Enterprise Manager 12c Now Available

    My Oracle Support (MOS)
    • Getting Started with Oracle E-Business Suite Plug-in, Release 12.1.0.1.0 (Doc ID 1434392.1)

    Documentation
    • Oracle Application Management Pack for Oracle E-Business Suite Guide, Release 12.1.0.1.0

    Certification
    • Platforms and OS release certification information is available from My Oracle Support via the Certification page.
    • Search using the official trademark name Oracle Application Management Pack for Oracle E-Business Suite and Release 12.1.0.1.0.

    Read the article

  • Microsoft Events Come Back to Fort Collins

    - by Jeff Certain
    It’s been a while since Microsoft MSDN and TechNet events have been in Fort Collins. I’m very pleased to be able to pass on Microsoft’s announcement that on April 21st, these events will be held at the Drake Center as half-day events. A huge “thank you” to Erin Dolan, Joe Shirey and Daniel Egan for making this happen! Join us for an in-person event you won’t want to miss! No matter what your role, you’ll find an event series that fits what you do—and what the 2010 products from Microsoft have to offer. Join us for Launch 2010 Highlights— a live, half-day event featuring the most popular sessions from the Launch 2010 Technical Readiness Series, presented by our own MSDN and TechNet Roadshow Evangelists. We've taken the top content from this lively series and packaged it up in two half-day sessions in Fort Collins. The morning will focus on IT pros, with hands-on tactics for boosting productivity with Microsoft Office® 2010 and SharePoint® 2010. In the afternoon, developers will learn how Microsoft® Visual Studio® 2010 supports rich platforms and promotes creativity, collaboration and much more. Register now and save your seat for these free, half-day events. Registration links: TechNet and MSDN Event

    Read the article

  • Kent .Net/SqlServer User Group – Upcoming events

    - by Dave Ballantyne
    At the Kent user group we have two upcoming events. Both are to be held at the F-Keys Training suite (http://f-keys.co.uk/) in Rochester, Kent. If you haven’t attended before, please note the location.

    14 June

    Is your code S.O.L.I.D.? (Nathan Gloyn)
    Everybody keeps on about SOLID principles, but what are they, and why should you care? This session is an introduction to SOLID. I'll aim to walk through each principle, telling you about it and then showing how a code base can be refactored using the principles to make your life easier. By the end of the session you should have a basic understanding of each principle, why to use it, and how using it can improve your code.

    Building composite applications with OpenRasta 3 (Sebastien Lambla)
    A wave of change is coming to Web development on .NET. Packaging technologies are bringing dependency management to .NET for the first time, streamlining development workflow and creating new possibilities for deployment and administration. The sky's the limit, and in this session we'll explore how open frameworks can help us leverage composition for the web.

    Register here for this event: http://www.eventbrite.com/event/1643797643

    05 July

    Tony Rogerson
    Achieving a throughput of 1.5 Terabytes, or over 92,000 8 KByte 100% random reads per second, on kit costing less than 2.5K, and of course what to do with it! The session will focus on commodity kit and how it can be used within business to provide massive performance benefits at little cost.

    End to End Report Creation and Management using SQL Server Reporting Services (Chris Testa-O'Neill)
    This session will walk through the authoring, management and delivery of reports, with a focus on the new features of Reporting Services 2008 R2. At the end of this session you will understand how to create a report in the new report designer, be aware of the report management options available, and know the delivery mechanisms that can be used to deliver reports.

    Register here for this event: http://www.eventbrite.com/event/1643805667

    Hope to see you at one or other (or even both, if you are that way inclined).

    Read the article

  • Reference Data Management and Master Data: How Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data?
    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets (including master data) and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and the geo hierarchy are examples of reference data changes.1

    The following illustrative list gives examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.
    • Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    • Taxonomies: Industry Classification Categories and Codes, e.g., the North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    • Relationships / Mappings: Product / Segment and Product / Geo; City → State → Postal Codes; Customer / Market Segment and Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 → ICD-10
    • Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time and Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?
    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes
    The reference data challenges in the healthcare industry offer a case in point.
Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon. Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks. Change Management: Point of Sales Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released at a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data represents reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change. References 1 Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
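
    To make the "cross-walk" idea above concrete, here is a minimal, hypothetical sketch in JavaScript of a forward/backward diagnosis-code mapping; the specific codes and the shape of the map are illustrative assumptions, not part of the ICD standards or of DRM itself.

        // Hypothetical ICD-9 -> ICD-10 crosswalk; real mappings are often one-to-many
        // and are curated by hand, which is why they need managed reference data.
        var icd9ToIcd10 = {
          '250.00': ['E11.9'],
          '410.90': ['I21.3', 'I21.9']
        };

        // Build the reverse map so reports can be run against either code set.
        var icd10ToIcd9 = {};
        Object.keys(icd9ToIcd10).forEach(function (oldCode) {
          icd9ToIcd10[oldCode].forEach(function (newCode) {
            (icd10ToIcd9[newCode] = icd10ToIcd9[newCode] || []).push(oldCode);
          });
        });

        // Cross-walk a diagnosis code in either direction.
        function crosswalk(code, direction) {
          var map = direction === 'forward' ? icd9ToIcd10 : icd10ToIcd9;
          return map[code] || []; // unmapped codes need human review
        }

        console.log(crosswalk('410.90', 'forward'));  // ['I21.3', 'I21.9']
        console.log(crosswalk('I21.9', 'backward'));  // ['410.90']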

    Read the article

  • How to decide on going into management?

    - by Rob Wells
    I read the transcript of a speech by Richard Hamming included as part of this SO question, and the speech had a quote that got me thinking about when someone should move into management: "When your vision of what you want to do is what you can do single-handedly, then you should pursue it. The day your vision, what you think needs to be done, is bigger than what you can do single-handedly, then you have to move toward management. And the bigger the vision is, the farther in management you have to go." Any other suggestions as to how you can decide if you want to move away from the coal face and into management?

    Read the article

  • Tracking Outgoing Links With Google Analytics Events

    - by the_archer
    I've been trying to track clicks on external links on my website using the events tracking method. So I've got my Google Analytics code set up before body ends as shown below (note: Blogger HTML-encodes the quotes, but the code works fine):

        <script type='text/javascript'>
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
        })();
        </script>

    Now I wanted to track a link on the addthis.com follow widget. So there is a link of the type below, to which, following the instructions from here, I added the onclick event:

        <a addthis:url='http://feeds.feedburner.com/myfeedburnerlurl' onClick="_gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);" class='addthis_button_rss_follow'/>

    I clicked on it a couple of times and have left it for over a day now, but nothing shows up in Google Analytics events; the Events page in GA just says zero events. Could anybody help me? Am I doing anything wrong?
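
    Unrelated to the AddThis widget itself, here is a minimal sketch of how outgoing links are often tagged using the same asynchronous _gaq API the page already loads; the category and action names are illustrative assumptions, not required values. For links that navigate away in the same tab, the page can unload before the tracking beacon is sent, which is another common reason events never appear in the reports.

        // Sketch: record a GA event for every click on an external link.
        var links = document.getElementsByTagName('a');
        for (var i = 0; i < links.length; i++) {
          if (links[i].hostname && links[i].hostname !== window.location.hostname) {
            links[i].addEventListener('click', function () {
              // Category/action/label here are just example names.
              _gaq.push(['_trackEvent', 'Outbound Links', 'click', this.href]);
            }, false);
          }
        }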

    Read the article

  • jQuery capture all changes to named inputs on a form

    - by Brian M. Hunt
    I'm trying to determine when any of a set of named input/select/radio/checkbox/hidden fields in a form changes. In particular, I'd like to capture when any changes are made to fields matching jQuery's selector $("form :input") where that input has a name attribute. However, the form isn't static, i.e. some of the fields are dynamically added later. My initial thought is to keep track of when new named elements matching :input are added, and then add an event handler, like this:

        function on_change() {
          alert("The form element with name " + $(this).attr("name") + " has changed");
        }

        function reg_new_e_handler(input_element) {
          input_element.change(on_change);
        }

    However, I'm quite hopeful I can avoid this with some jQuery magic. In particular, is there a way to register an event handler in jQuery that would handle input elements that match the following:

        $("form :input").filter(function () { return $(this).attr("name"); }).change(on_change);

    but have this event set update whenever new input elements are added? I've thought that it may be possible to capture the keyup event on the form node with $("form").keyup(on_change), but I'm not so sure how one could capture change events. I'd also like this to capture keyup events. Thank you for reading. Brian
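
    A delegated handler is one way to cover fields that are added later without re-registering anything; a minimal sketch, assuming jQuery 1.7+ for .on() (older versions would use .delegate() or .live()). Note that hidden inputs changed programmatically do not raise change events, so code that sets them would still need to call .trigger('change').

        // Delegate from the form so dynamically added fields are covered too.
        // The selector only matches form controls that carry a name attribute.
        $('form').on('change keyup', ':input[name]', function () {
          alert('The form element with name ' + $(this).attr('name') + ' has changed');
        });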

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents spend manual efforts cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches and can also be entered as "'' ". These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: Customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts are stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. 
    Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM
    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:
    • enterprise-grade MDM performance
    • complete technology that can be rapidly deployed and addresses multiple business issues
    • end-to-end MDM process management with data quality monitoring and assurance
    • pre-built, business-relevant MDM applications with data stores and workflows
    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.
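
    As a toy illustration of the cross-reference idea in point 3 above, here is a minimal sketch (the product numbers, master IDs and sales figures are invented) of aggregating sales for one product that carries different local codes:

        // Hypothetical central cross-reference: local product numbers -> one master ID.
        var crossReference = {
          'US-10442': 'PROD-7',
          'DE-88310': 'PROD-7',
          'FR-55021': 'PROD-7'
        };

        // Sales rows as they might come out of the regional systems.
        var sales = [
          { productNumber: 'US-10442', amount: 1200 },
          { productNumber: 'DE-88310', amount: 800 },
          { productNumber: 'FR-55021', amount: 450 }
        ];

        // Aggregate by master ID instead of rebuilding the mapping in a spreadsheet each month.
        var totals = {};
        sales.forEach(function (row) {
          var masterId = crossReference[row.productNumber] || 'UNMAPPED';
          totals[masterId] = (totals[masterId] || 0) + row.amount;
        });

        console.log(totals); // { 'PROD-7': 2450 }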

    Read the article

  • Windows OS level file open event trigger

    - by john
    Colleagues, I need to run a script/program on certain basic OS-level events, in particular when a file in Windows is opened. The open may be read-only or for editing, and may be initiated in a number of ways: from Windows Explorer (open or ), by being selected from a viewing or editing application via the native file chooser, or by drag-and-drop into an editing or viewing application. Further, I need the trigger to "hold" the event from completing the action until the handler program has finished running. The event handler program may return a pass state or a fail state; if a fail state is returned, the event must disallow the initially requested action. Lastly, I need to add to the file in question a property or attribute containing metadata that the event trigger handler program will use to determine the pass/fail condition, which will ultimately determine whether the user is permitted to open the file. Please note that this is NOT a Windows event log situation, but an OS-level file open event. Thanks very much for your help. Regards, j

    Read the article

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, My company develops a CDN / web-hosting solution. We have a middleware that serves as a business logic layer and exposes web services for the front-end. I would like a clean solution to feature management - there are uncertainties and ugly workarounds in the software about which the devs would say "when it happens or is broken, we will fix it". For example, here are the features that a web publisher can have:
    • Sites limit
    • Bandwidth limit
    • SSL feature + SSL configuration per site
    If we downgrade a web publisher who has 10 sites down to 5 sites, we can choose not to suspend the other 5 sites, or we can prompt for suspension before the downgrade. For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded the limit, we suspend his account. For the SSL feature, every SSL configuration is tied to a site, so what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled? So as you can see, there are many different situations and different ways of handling them. I can build a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrades/downgrades. Bad. Or a system designed in such a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is not DEFUNCT). There can be many ways; I am still thinking but puzzled. I am wondering, how would you tackle this issue, and are there any recommended patterns, books or software that you think I can refer to? Appreciate your help.

    Read the article

  • Join us for Live Oracle Linux and Oracle VM Cloud Events in Europe

    - by Monica Kumar
    Join us for a series of live events and discover how Oracle VM and Oracle Linux offer an integrated and optimized infrastructure for quickly deploying a private cloud environment at lower cost. As one of the most widely deployed operating systems today, Oracle Linux delivers higher performance, better reliability, and stability, at a lower cost for your cloud environments. Oracle VM is an application-driven server virtualization solution fully integrated and certified with Oracle applications to deliver rapid application deployment and simplified management. With Oracle VM, you have peace of mind that the entire Oracle stack deployed is fully certified by Oracle. Register now for any of the upcoming events, and meet with Oracle experts to discuss how we can help in enabling your private cloud. Nov 20: Foundation for the Cloud: Oracle Linux and Oracle VM (Belgium) Nov 21: Oracle Linux & Oracle VM Enabling Private Cloud (Germany) Nov 28: Realize Substantial Savings and Increased Efficiency with Oracle Linux and Oracle VM (Luxembourg) Nov 29: Foundation for the Cloud: Oracle Linux and Oracle VM (Netherlands) Dec 5: MySQL Tech Tour, including Oracle Linux and Oracle VM (France) Hope to see you at one of these events!

    Read the article

  • My events don't show up in the goal funnels or conversion funnels

    - by Amit Bens
    I have an event set up on a website and I'd like to track the effect this event has on conversion rate. The event seems to be working fine - I can see it on Top Events with all the labels, etc. But when going into Goal Flow and selecting 'Event Category' these events don't show up. I have this running for about a week. And I have made multiple checks to verify that I have events that triggered the conversion goal. Any clue about what I'm doing wrong?

    Read the article

  • One True Event Loop

    - by CyberShadow
    Simple programs that collect data from only one system need only one event loop. For example, Windows applications have the message loop, POSIX network programs usually have a select/epoll/etc. loop at their core, and pure SDL games use SDL's event loop. But what if you need to collect events from several subsystems? Such as an SDL game which doesn't use SDL_net for networking. I can think of several solutions:
    1. Polling (ugh)
    2. Put each event loop in its own thread, and:
       2.1. Send messages to the main thread, which collects and processes the events, or
       2.2. Place the event-processing code of each thread in a critical section, so that the threads can wait for events asynchronously but process them synchronously
    3. Choose one subsystem for the main event loop, and pass events from other subsystems via that subsystem as custom messages (for example, the Windows message loop and custom messages, or a socket select() loop and passing events via a loopback connection).
    Option 2.1 is more interesting on platforms where message-passing is a well-developed threading primitive (e.g. in the D programming language), but 2.2 looks like the best option to me.

    Read the article

  • Finding "spare time" in a day from within a list of events

    - by MFB
    I have a list of events which is always sorted chronologically. The start time is always followed by the end time. Times are strings formatted as 'HHmmss'. // list of events var events = [ '010000', // start '013000', // end... '053000', '060000', '161500', '184500'] // desired output var spares = [ '000000', // start '010000', // end... '013000', '053000', '060000', '161500', '184500', '235959'] How can I programmatically create a new list of "spare time" from 000000 to 235959? PS I'm trying to do this in Javascript, but any conceptual answer or pseudo code would be helpful too.
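
    Because the list alternates start/end times and is already sorted, the gaps are just the day boundaries wrapped around the existing times. A minimal sketch, assuming that alternation always holds and skipping zero-length gaps:

        function spareTime(events) {
          // Wrap the day boundaries around the sorted start/end pairs.
          var bounded = ['000000'].concat(events, ['235959']);
          // bounded now alternates spare-start, spare-end, spare-start, ...
          var spares = [];
          for (var i = 0; i < bounded.length; i += 2) {
            if (bounded[i] !== bounded[i + 1]) { // skip zero-length gaps
              spares.push(bounded[i], bounded[i + 1]);
            }
          }
          return spares;
        }

        console.log(spareTime(['010000', '013000', '053000', '060000', '161500', '184500']));
        // ['000000','010000','013000','053000','060000','161500','184500','235959']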

    Read the article

  • Reach More Customers - Partner Event Publishing System Now Available!

    - by swalker
    The Partner Event Publishing Service is now available to partners that have completed at least one Specialization. If you are an active Gold, Platinum or Diamond OPN member having accomplished at least one Specialization then you can start publishing your standalone live events today, including both in-person and webinar activities. Events are published on both Oracle.com/events and on Oracle’s internal calendar. You benefit by reaching new potential customers and creating more brand awareness.  Reach more customers by publishing Partner led events on Oracle.com/events by filling out a simple spreadsheet and submitting them to [email protected]. Click here for more information, and click here to download the Partner Event Publishing Form spreadsheet.

    Read the article

  • Java Spotlight Episode 101: JavaOne 2012 Part 2 - Community Events

    - by Roger Brinkley
    An interview with Martijn Verburg on Adopt a JSR, Nichole Scott and John Yeary on Community, and Hellena O'Dell on the Oracle Music Festival about community events and happenings at JavaOne 2012. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    Events
    • Sep 30-Oct 4, JavaOne, San Francisco
    • Oct 3-4, Java Embedded @ JavaOne, San Francisco
    • Oct 15-17, JAX London
    • Oct 30-Nov 1, ARM TechCon, Santa Clara
    • Oct 22-23, Freescale Technology Forum - Japan, Tokyo
    • Oct 31, JFall, Netherlands
    • Nov 2-3, JMaghreb, Morocco
    • Nov 13-17, Devoxx, Belgium

    Feature Interview: Adopt a JSR
    • Adopt a JSR Home
    • Adopt OpenJDK Home
    • LJC's Adopt a JSR
    • jClarity - Java Performance Tuning for the Cloud

    Community Events at JavaOne
    • User Groups at Oracle OpenWorld and JavaOne
    • To access the Java User Group content on Sunday, go to the content catalog for JavaOne and filter the search criteria to Sunday sessions
    • Oracle Music Festival

    Read the article

  • Listening For and Raising Events in the BLL

    - by OneSource
    I'm working on a WinForms .Net Recording App and I have a RecordingMgr in my BLL to listen for new events captured by another class. I want to display the events in my UI and I'm stuck as to what's the best way to do this. I can think of a few scenarios to handle this but all of them seem sub-optimal:
    • Listen for and handle Recorded Events in both the UI and in the RecordingMgr
    • After receiving the event in the RecordingMgr, raise it again so that the UI can pick it up
    • Create a variable in RecordingMgr (e.g., a BindingList) that the UI can bind to, and update it when an Event is received
    • Ditch the RecordingMgr and just put the event recording logic in the UI
    What's the best approach? Something above or something else?

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model and so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are: Some standard packages and libraries such as compilers and databases need to be downloaded and installed. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages. New users need to be created to manage the databases and run the models. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts. I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality except there are a couple usage cases that I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet- i.e. all configuration and set up files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to setup test installations outside of VirtualBox.

    Read the article

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found the ulog grows out of control and locks up the disk. At first I found the -ulim option useful and limited the logfile size, however it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that will delete ulogs older than X, once I find out how far back Tokyo Tyrant needs in the update log in order to failover. Does anyone have any experience with this Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size vs how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan

    Read the article

  • SNMP based network discovery (switches), device (ports on switches) power management

    - by SaM
    In an enterprise network, what would be the right way to generate a list of switches (SNMP managed)? Is it reasonable to ask the organization to supply a list such as this:
    • Switch name
    • IP address of switch
    • Location
    • SNMP community strings
    Or are there standard ways to run discovery scans - UDP broadcasts? After having generated a repository such as the above, given a single switch, how would one query it for the list of all devices attached to it? Finally, how would one selectively power down/power up ports (remotely, using SNMP)? The platform is going to be .NET based (C#) and the library being used is SharpSNMP.

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon,
    Clients: 10, all Apples running either Leopard or Snow Leopard
    Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution)
    The situation: I work in a small design company with 8 people; we are looking to consolidate all our image files into one location. At present we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files.
    Ideally what is needed:
    • Centralised on one server, but allows us to search via Spotlight (not essential, but would be nice)
    • Includes searchable metadata such as date, location, title
    • Open source or as low cost as possible
    • Allows simultaneous users to import files
    So far, I have looked at a few open source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also using a drive with no permissions, putting a library on it and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to the library... Any ideas what could fulfill our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article
