Search Results

Search found 3134 results on 126 pages for 'bill jobs'.


  • Who are the most important people in open-source software? [closed]

    - by poseid
    I am reading a book by Malcolm Gladwell on the circumstances behind successful careers. The book argues that Bill Gates, Steve Jobs, Bill Joy and other successful computer pioneers were born between 1950 and 1955 and had put in around 10,000 hours of practice before microcomputers became widely available in the 1970s and their fairy-tale success stories began. Now that we are in the age of Web 2.0, with new forms of databases and pervasive access to information: who, in your opinion, are the most successful computer programmers or scientists of our time, when were they born, and which technologies did they have access to?

    Read the article

  • Python multiprocessing global variable updates not returned to parent

    - by user1459256
    I am trying to return values from subprocesses, but these values are unfortunately unpicklable. So I used global variables with threads with success, but I have not been able to retrieve updates made in subprocesses when using the multiprocessing module. I hope I'm missing something. The results printed at the end are always the same as the initial values of the variables dataDV03 and dataDV04. The subprocesses are updating these global variables, but the global variables remain unchanged in the parent.

import multiprocessing

# NOT ABLE to get python to return values in passed variables.

ants = ['DV03', 'DV04']
dataDV03 = ['', '']
dataDV04 = {'driver': '', 'status': ''}

def getDV03CclDrivers(lib):  # call global variable
    global dataDV03
    dataDV03[1] = 1
    dataDV03[0] = 0
    # eval( 'CCL.' + lib + '.' + lib + '( "DV03" )' )  these are unpicklable instantiations

def getDV04CclDrivers(lib, dataDV04):  # pass global variable
    dataDV04['driver'] = 0
    # eval( 'CCL.' + lib + '.' + lib + '( "DV04" )' )

if __name__ == "__main__":
    jobs = []
    if 'DV03' in ants:
        j = multiprocessing.Process(target=getDV03CclDrivers, args=('LORR',))
        jobs.append(j)
    if 'DV04' in ants:
        j = multiprocessing.Process(target=getDV04CclDrivers, args=('LORR', dataDV04))
        jobs.append(j)
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print 'Results:\n'
    print 'DV03', dataDV03
    print 'DV04', dataDV04

I cannot post to my question, so I will try to edit the original. Here is the object that is not picklable:

In [1]: from CCL import LORR
In [2]: lorr = LORR.LORR('DV20', None)
In [3]: lorr
Out[3]: <CCL.LORR.LORR instance at 0x94b188c>

This is the error returned when I use a multiprocessing.Pool to return the instance back to the parent:

Thread getCcl (('DV20', 'LORR'),)
Process PoolWorker-1:
Traceback (most recent call last):
  File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap
    self.run()
  File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 88, in run
    self._target(*self._args, **self._kwargs)
  File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/pool.py", line 71, in worker
    put((job, i, result))
  File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/queues.py", line 366, in put
    return send(obj)
UnpickleableError: Cannot pickle <type 'thread.lock'> objects

In [5]: dir(lorr)
Out[5]:
['GET_AMBIENT_TEMPERATURE', 'GET_CAN_ERROR', 'GET_CAN_ERROR_COUNT', 'GET_CHANNEL_NUMBER', 'GET_COUNT_PER_C_OP', 'GET_COUNT_REMAINING_OP', 'GET_DCM_LOCKED', 'GET_EFC_125_MHZ', 'GET_EFC_COMB_LINE_PLL', 'GET_ERROR_CODE_LAST_CAN_ERROR', 'GET_INTERNAL_SLAVE_ERROR_CODE', 'GET_MAGNITUDE_CELSIUS_OP', 'GET_MAJOR_REV_LEVEL', 'GET_MINOR_REV_LEVEL', 'GET_MODULE_CODES_CDAY', 'GET_MODULE_CODES_CMONTH', 'GET_MODULE_CODES_DIG1', 'GET_MODULE_CODES_DIG2', 'GET_MODULE_CODES_DIG4', 'GET_MODULE_CODES_DIG6', 'GET_MODULE_CODES_SERIAL', 'GET_MODULE_CODES_VERSION_MAJOR', 'GET_MODULE_CODES_VERSION_MINOR', 'GET_MODULE_CODES_YEAR', 'GET_NODE_ADDRESS', 'GET_OPTICAL_POWER_OFF', 'GET_OUTPUT_125MHZ_LOCKED', 'GET_OUTPUT_2GHZ_LOCKED', 'GET_PATCH_LEVEL', 'GET_POWER_SUPPLY_12V_NOT_OK', 'GET_POWER_SUPPLY_15V_NOT_OK', 'GET_PROTOCOL_MAJOR_REV_LEVEL', 'GET_PROTOCOL_MINOR_REV_LEVEL', 'GET_PROTOCOL_PATCH_LEVEL', 'GET_PROTOCOL_REV_LEVEL', 'GET_PWR_125_MHZ', 'GET_PWR_25_MHZ', 'GET_PWR_2_GHZ', 'GET_READ_MODULE_CODES', 'GET_RX_OPT_PWR', 'GET_SERIAL_NUMBER', 'GET_SIGN_OP', 'GET_STATUS', 'GET_SW_REV_LEVEL', 'GET_TE_LENGTH', 'GET_TE_LONG_FLAG_SET', 'GET_TE_OFFSET_COUNTER', 'GET_TE_SHORT_FLAG_SET', 'GET_TRANS_NUM', 'GET_VDC_12', 'GET_VDC_15', 'GET_VDC_7',
'GET_VDC_MINUS_7', 'SET_CLEAR_FLAGS', 'SET_FPGA_LOGIC_RESET', 'SET_RESET_AMBSI', 'SET_RESET_DEVICE', 'SET_RESYNC_TE', 'STATUS', '_HardwareDevice__componentName', '_HardwareDevice__hw', '_HardwareDevice__stickyFlag', '_LORRBase__logger', '__del__', '__doc__', '__init__', '__module__', '_devices', 'clearDeviceCommunicationErrorAlarm', 'getControlList', 'getDeviceCommunicationErrorCounter', 'getErrorMessage', 'getHwState', 'getInternalSlaveCanErrorMsg', 'getLastCanErrorMsg', 'getMonitorList', 'hwConfigure', 'hwDiagnostic', 'hwInitialize', 'hwOperational', 'hwSimulation', 'hwStart', 'hwStop', 'inErrorState', 'isMonitoring', 'isSimulated'] In [6]:
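
One common way around this, sketched below under the assumption that the unpicklable CCL objects can be reduced to plain values inside the worker: each child process gets its own copy of the parent's globals, so in-place updates never propagate back. A multiprocessing.Manager proxy (or a Queue) provides a shared container that survives the process boundary, as long as only picklable values are stored in it. The worker names here mirror the post but are illustrative only.

import multiprocessing

def getDV03CclDrivers(lib, shared):
    # Illustrative worker: instantiate CCL.<lib> here, then store only plain,
    # picklable values in the shared proxy (never the CCL instance itself).
    shared['DV03'] = [0, 1]

def getDV04CclDrivers(lib, shared):
    shared['DV04'] = {'driver': 0, 'status': ''}

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    shared = manager.dict()          # proxy dict shared across processes
    jobs = [
        multiprocessing.Process(target=getDV03CclDrivers, args=('LORR', shared)),
        multiprocessing.Process(target=getDV04CclDrivers, args=('LORR', shared)),
    ]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print(dict(shared))              # the parent now sees the workers' updates

A multiprocessing.Queue achieves the same thing with explicit message passing; either way, the CCL instances themselves stay inside the worker and only plain results cross back to the parent.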

    Read the article

  • rounded corners in Qooxdoo - problems with ImageMagick and PNG

    - by lomme47
    Hi, I want to create a button with rounded corners in Qooxdoo but I'm having some problems. I guess it's a problem with ImageMagick and not my Qooxdoo code, but I'll post it anyway. So in order to create rounded corners I'm following this guide Guide this is what my image.json contains: { "jobs" : { "common" : { "let" : { "RESPATH" : "source/resource/custom" }, "cache" : { "compile" : "../cache" } }, "image-clipping" : { "extend" : ["common"], "slice-images" : { "images" : { "${RESPATH}/image/source/groupBox.png" : { "prefix" : "../clipped/groupBox", "border-width" : 4 } } } }, "image-combine" : { "extend" : ["common"], "combine-images" : { "images" : { "${RESPATH}/image-combined/combined.png": { "prefix" : [ "${RESPATH}" ], "layout" : "vertical", "input" : [ { "prefix" : [ "${RESPATH}" ], "files" : [ "${RESPATH}/image/clipped/groupBox*.png" ] } ] } } } } } } Here's what happens when I run image-clipping and image-combine: C:\customgenerate.py -c image.json image-clipping INITIALIZING: CUSTOM Configuration: image.json Jobs: image-clipping Resolving config includes... Resolving jobs... Incorporating job defaults... Resolving macros... Resolving libs/manifests... EXECUTING: IMAGE-CLIPPING Initializing cache... Done C:\customgenerate.py -c image.json image-combine INITIALIZING: CUSTOM Configuration: image.json Jobs: image-combine Resolving config includes... Resolving jobs... Incorporating job defaults... Resolving macros... Resolving libs/manifests... EXECUTING: IMAGE-COMBINE Initializing cache... Combining images... Creating image C:\custom\source\resource\custom\image-combined\combined.png Magick: no decode delegate for this image format \docume~1\lomme\lokala~1\ tmpql73hk' @ error/constitute.c/ReadImage/532. Magick: missing an image filename C:\custom\source\resource\custom\image-combined\combined.png' @ error/montage.c/MontageImageCommand/1707. The montage command (montage -geometry +0+0 -gravity NorthWest -tile 1x -background None @c:\docume~1\lomme\lokala~1\temp\tmpql73hk C:\custom\source\resources\custom\image-combined\combined.png) failed with the following return code:1 The image-clipping works like a charm but I get some kinda error message when I try to run image-combine. When I google the error messages it says ImageMagick is lacking PNG support but I can use other commands like "convert a.jpg b.png" so there must be some kinda png support? here's what "identify -list format" returns: PNG* PNG rw- Portable Network Graphics (libpng 1.2.43) See http://www.libpng.org/ for details about the PNG format. PNG24* PNG rw- opaque 24-bit RGB (zlib 1.2.3) PNG32* PNG rw- opaque or transparent 32-bit RGBA PNG8* PNG rw- 8-bit indexed with optional binary transparency So why do i get this error message: Magick: no decode delegate for this image format Looks to me like there's png support? I've never used ImageMagick before so I'm completely lost :D Thanks in advance
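
One way to narrow this down, sketched below with hypothetical paths that match the prefixes used in image.json above: run identify on the clipped slices the generator produced, then call montage with an explicit file list instead of the temporary @-file it builds.

import glob
import subprocess

# Hypothetical paths, matching the prefixes used in image.json above.
clipped = sorted(glob.glob('source/resource/custom/image/clipped/groupBox*.png'))

for path in clipped:
    # If PNG decoding works, identify prints something like "groupBoxtl.png PNG 4x4 ..."
    subprocess.call(['identify', path])

# Bypass the generator's temporary @-file and pass the slices explicitly.
cmd = (['montage', '-geometry', '+0+0', '-gravity', 'NorthWest',
        '-tile', '1x', '-background', 'None']
       + clipped + ['combined-test.png'])
print(' '.join(cmd))
subprocess.call(cmd)

If identify decodes every slice and the explicit montage call succeeds, the PNG delegate is fine and the failure lies in how the temporary file list is handed to montage; if both fail, the montage/PNG installation itself is the more likely culprit.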

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
Abstract

Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems based on Reference Architecture goals, principles, and standards, and it helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes a generic reference architecture for telecom enterprises and moves on to explain how to achieve an Enterprise Reference Architecture by using SOA.

Introduction

A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It serves as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparison for various departments/business entities of a company, or for the various companies within an enterprise. It provides multiple views for multiple stakeholders.

Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.

Purpose of Reference Architecture

In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel, as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.
Reference architecture provides missing architectural information that can be provided in advance to project team members to enable consistent architectural best practices.   Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:   ·       Reference architecture is more of a communication channel to an enterprise ·       Helps the business owners to accommodate to their strategies, vision, objectives, and principles. ·       Evaluates the IT systems based on Reference Architecture Principles ·       Reduces IT spending through increasing functionality, availability, scalability, etc ·       A Real-time Integration Model helps to reduce the latency of the data updates Is used to define a single source of Information ·       Provides a clear view on how to manage information and security ·       Defines the policy around the data ownership, product boundaries, etc. ·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets ·       Has a shorter implementation time and cost   Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).     Common pitfalls for Telecom Service Providers   Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations – Some of these indicate a lack of maturity of the telecom service provider:   ·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs. ·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications. ·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality. ·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications. ·       Application boundaries are not clear, and functionality that is not in the initial scope of that application gets pushed onto it. This results in the development of the multiple, small applications without proper boundaries. ·       Usage of Legacy OSS/BSS systems, poor Integration across Multiple COTS Products and Internal Systems. Most of the Integrations are developed on ad-hoc basis and Point-to-Point Integration. ·       Redundancy of the business functions in different applications • Fragmented data across the different applications and no integrated view of the strategic data • Lot of performance Issues due to the usage of the complex integration across OSS and BSS systems   However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum. 
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.

“The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail.” – Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

·       Transformation of business structures to align with customer requirements
·       Adoption of more Internet-like technical architectures. The Web 2.0 concept is increasingly being used.
·       Virtualization of the traditional operations support system (OSS)
·       Adoption of SOA to support development of IP-based services
·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are Reference Architecture Goals, Principles, and Enterprise Vision and Telecom Transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today’s telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems and implications on the service provider’s organizational requirements and structure.   Telecom reference architectures are today expected to:   ·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, back end systems ·       Be able to provision multiple services and service bundles • Deliver converged voice, video and data services ·       Leverage the existing Network Infrastructure ·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. ·       Support charging of advanced data services such as VoIP, On-Demand, Services (e.g.  Video), IMS/SIP Services, Mobile Money, Content Services and IPTV. ·       Help in faster deployment of new services • Serve as an effective platform for collaboration between network IT and business organizations ·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP) ·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management ·       Lower operating costs to drive profitability   Enterprise Reference Architecture   The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for the Telecom Application Usage and how to achieve the Enterprise Level Reference Architecture using SOA.   • Telecom Reference Architecture • Enterprise SOA based Reference Architecture   Telecom Reference Architecture   Tele Management Forum’s New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:   ·       The enhanced Telecom Operations Map (eTOM) is a business process framework. ·       The Shared Information Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization. ·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM. ·       The Technology Neutral Architecture (TNA) is an integrated framework. TNA is an architecture that is sustainable through technology changes.   NGOSS Architecture Standards are:   ·       Centralized data ·       Loosely coupled distributed systems ·       Application components/re-use  ·       A technology-neutral system framework with technology specific implementations ·       Interoperability to service provider data/processes ·       Allows more re-use of business components across multiple business scenarios ·       Workflow automation   The traditional operator systems architecture consists of four layers,   ·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information. 
·       Operations Support System (OSS) layer, built around product, service, and resource inventories.
·       Networks layer – consists of Network elements and 3rd Party Systems.
·       Integration Layer – to maximize application communication and overall solution flexibility.

The reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any Telecom Service Provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.

CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

·       Manage the end-to-end lifecycle of a customer request for products.
·       Create and manage customer profiles.
·       Manage all interactions with customers – inquiries, requests, and responses.
·       Provide updates to Billing and other southbound systems on customer/account-related changes such as customer/account creation, deletion, modification, bill requests, final bills, duplicate bills, and credit limits, through Middleware.
·       Work with the Order Management System and the Product and Service Management components within CRM.
·       Manage customer preferences – involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. Requests by customers are sent via fulfillment/provisioning to the billing system for order processing.

2.
Billing and Revenue Management   Billing and Revenue Management handles the collection of appropriate usage records and production of timely and accurate bills – for providing pre-bill usage information and billing to customers; for processing their payments; and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.   The key functionalities provided by these applications are   ·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers. ·       To manage customers’ billing accounts, process their payments, perform payment collections, and monitor the status of the account balance. ·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints. ·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing. ·       Support revenue sharing; split charging where usage is guided to an account different from the service consumer. ·       Support prepaid and post-paid rating. ·       Send notification on approach / exceeding the usage thresholds as enforced by the subscribed offer, and / or as setup by the customer. ·       Support prepaid, post paid, and hybrid (where some services are prepaid and the rest of the services post paid) customers and conversion from post paid to prepaid, and vice versa. ·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, account receivable, GL Interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc. ·       Initiate direct debit to collect payment against an invoice outstanding. ·       Send notification to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceed, etc.   Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.   3. Mediation   Mediation systems transform/translate the Raw or Native Usage Data Records into a general format that is acceptable to billing for their rating purposes.   The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution.   ·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocol and interfaces. ·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format. ·       Validate Usage Data Records from each source. ·       Segregates Usage Data Records coming from each source to multiple, based on the segregation requirement of end Application. ·       Aggregates Usage Data Records based on the aggregation rule if any from different sources. ·       Consolidates multiple Usage Data Records from each source. ·       Delivers formatted Usage Data Records to different end application like Billing, Interconnect, Fraud Management, etc. 
·       Generates audit trail for incoming Usage Data Records and keeps track of all the Usage Data Records at various stages of mediation process. ·       Checks duplicate Usage Data Records across files for a given time window.   4. Fulfillment   This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order, and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer update on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.   The key functionalities provided by these applications are   ·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders; ·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable; ·       Checking the credit worthiness of customers as part of the customer order process; ·       Testing the completed offering to ensure it is working correctly; ·       Updating of the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled; ·       Assigning and tracking customer provisioning activities; ·       Managing customer provisioning jeopardy conditions; and ·       Reporting progress on customer orders and other processes to customer.   These applications typically get orders from CRM systems. They interact with network elements and billing systems for fulfillment of orders.   5. Enterprise Management   This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that   ·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;   ·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;   ·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.     (i) Enterprise Risk Management:   Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts. 
Two key areas covered in Risk Management by telecom operators are:   ·       Revenue Assurance: Revenue assurance system will be responsible for identifying revenue loss scenarios across components/systems, and will help in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution. o   Identify all usage information dropped when networks are being upgraded. o   Interconnect bill verification. o   Identify where services are routinely provisioned but never billed. o   Identify poor sales policies that are intensifying collections problems. o   Find leakage where usage is sent to error bucket and never billed for. o   Find leakage where field service, CRM, and network build-out are not optimized.   ·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.   The key roles and responsibilities of the system component are as follows:   o   Fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically. o   Fraud management will be able to detect the unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will raise the alert to the operator the very first time of such illegal calls or calls which are not billed. o   The solution will be to have an alarm management system that will deliver alarms to the operator/provider whenever it detects a fraud, thus minimizing fraud by catching it the first time it occurs. o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems   (ii) Knowledge Management   This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.   Key responsibilities of knowledge base management are to   ·       Maintain knowledge base – Creation and updating of knowledge base on ongoing basis. ·       Search knowledge base – Search of knowledge base on keywords or category browse ·       Maintain metadata – Management of metadata on knowledge base to ensure effective management and search. ·       Run report generator. ·       Provide content – Add content to the knowledge base, e.g., user guides, operational manual, etc.   (iii) Document Management   It focuses on maintaining a repository of all electronic documents or images of paper documents relevant to the enterprise using a system.   (iv) Data Management   It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.   Key responsibilities of Data Management are   ·       Using ETL, extract the data from CRM, Billing, web content, ERP, campaign management, financial, network operations, asset management info, customer contact data, customer measures, benchmarks, process data, e.g., process inputs, outputs, and measures, into Enterprise Data Warehouse. 
·       Management of data traceability with source, data related business rules/decisions, data quality, data cleansing data reconciliation, competitors data – storage for all the enterprise data (customer profiles, products, offers, revenues, etc.) ·       Get online update through night time replication or physical backup process at regular frequency. ·       Provide the data access to business intelligence and other systems for their analysis, report generation, and use.   (v) Business Intelligence   It uses the Enterprise Data to provide the various analysis and reports that contain prospects and analytics for customer retention, acquisition of new customers due to the offers, and SLAs. It will generate right and optimized plans – bolt-ons for the customers.   The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the Enterprise Level:   ·       It will do Pattern analysis and reports problem. ·       It will do Data Analysis – Statistical analysis, data profiling, affinity analysis of data, customer segment wise usage patterns on offers, products, service and revenue generation against services and customer segments. ·       It will do Performance (business, system, and forecast) analysis, churn propensity, response time, and SLAs analysis. ·       It will support for online and offline analysis, and report drill down capability. ·       It will collect, store, and report various SLA data. ·       It will provide the necessary intelligence for marketing and working on campaigns, etc., with cost benefit analysis and predictions.   It will advise on customer promotions with additional services based on loyalty and credit history of customer   ·       It will Interface with Enterprise Data Management system for data to run reports and analysis tasks. It will interface with the campaign schedules, based on historical success evidence.   (vi) Stakeholder and External Relations Management   It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.   (vii) Enterprise Resource Planning   It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform and enterprise wide system environment.   The key roles and responsibilities for Enterprise System are given below:   ·        It will handle responsibilities such as core accounting, financial, and management reporting. ·       It will interface with CRM for capturing customer account and details. ·       It will interface with billing to capture the billing revenue and other financial data. ·       It will be responsible for executing the dunning process. Billing will send the required feed to ERP for execution of dunning. ·       It will interface with the CRM and Billing through batch interfaces. Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems. 
E.g., an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchanges.   6. External Interfaces/Touch Points   The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a Service Provider to other parties can be achieved by a variety of mechanisms, including:   ·       Exchange of emails or faxes ·       Call Centers ·       Web Portals ·       Business-to-Business (B2B) automated transactions   These applications provide an Internet technology driven interface to external parties to undertake a variety of business functions directly for themselves. These can provide fully or partially automated service to external parties through various touch points.   Typical characteristics of these touch points are   ·       Pre-integrated self-service system, including stand-alone web framework or integration front end with a portal engine ·       Self services layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment ·       Portlets driven connectivity exposing data and services interoperability through a portal engine or web application   These touch points mostly interact with the CRM systems for requests, inquiries, and responses.   7. Middleware   The component will be primarily responsible for integrating the different systems components under a common platform. It should provide a Standards-Based Platform for building Service Oriented Architecture and Composite Applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution.   ·       As an integration framework, covering to and fro interfaces ·       Provide a web service framework with service registry. ·       Support SOA framework with SOA service registry. ·       Each of the interfaces from / to Middleware to other components would handle data transformation, translation, and mapping of data points. ·       Receive data from the caller / activate and/or forward the data to the recipient system in XML format. ·       Use standard XML for data exchange. ·       Provide the response back to the service/call initiator. ·       Provide a tracking until the response completion. ·       Keep a store transitional data against each call/transaction. ·       Interface through Middleware to get any information that is possible and allowed from the existing systems to enterprise systems; e.g., customer profile and customer history, etc. ·       Provide the data in a common unified format to the SOA calls across systems, and follow the Enterprise Architecture directive. ·       Provide an audit trail for all transactions being handled by the component.   8. Network Elements   The term Network Element means a facility or equipment used in the provision of a telecommunications service. Such terms also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.   Typical network elements in a GSM network are Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.   
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing system for rating and billing. They also integrate with provisioning systems for order/service fulfillment.   9. 3rd Party Applications   3rd Party systems are applications like content providers, payment gateways, point of sale terminals, and databases/applications maintained by the Government.   Depending on applicability and the type of functionality provided by 3rd party applications, the integration with different telecom systems like CRM, provisioning, and billing will be done.   10. Service Delivery Platform   A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concept of SOA and layered architecture. They support the delivery of voice, data services, and content in network and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.   SOA Reference Architecture   SOA concept is based on the principle of developing reusable business service and building applications by composing those services, instead of building monolithic applications in silos. It’s about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.   In an SOA, resources are made available to participants in a value net, enterprise, line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization’s business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:   ·       The business to specify processes as orchestrations of reusable services ·       Technology agnostic business design, with technology hidden behind service interface ·       A contractual-like interaction between business and IT, based on service SLAs ·       Accountability and governance, better aligned to business services ·       Applications interconnections untangling by allowing access only through service interfaces, reducing the daunting side effects of change ·       Reduced pressure to replace legacy and extended lifetime for legacy applications, through encapsulation in services   ·       A Cloud Computing paradigm, using web services technologies, that makes possible service outsourcing on an on-demand, utility-like, pay-per-usage basis   The following section represents the Reference Architecture of logical view for the Telecom Solution. The new custom built application needs to align with this logical architecture in the long run to achieve EA benefits.   Packaged implementation applications, such as ERP billing applications, need to expose their functions as service providers (as other applications consume) and interact with other applications as service consumers.   COT applications need to expose services through wrappers such as adapters to utilize existing resources and at the same time achieve Enterprise Architecture goal and objectives.   
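
As a loose illustration of the wrapper idea above (the class and method names are invented for the sketch and do not refer to any particular product): a thin adapter can expose a packaged billing function behind a technology-neutral service contract, so consumers bind to the contract rather than to the COTS API.

class BillingService(object):
    """Technology-neutral service contract that consumers depend on."""
    def get_outstanding_balance(self, account_id):
        raise NotImplementedError

class LegacyBillingSystem(object):
    """Stand-in for a packaged/COTS billing API (hypothetical)."""
    def fetch_invoice_total(self, acct_no):
        return 42.50

class LegacyBillingAdapter(BillingService):
    """Wrapper/adapter that exposes the legacy call through the service contract."""
    def __init__(self, legacy):
        self._legacy = legacy

    def get_outstanding_balance(self, account_id):
        # Translate identifiers/formats here if the legacy API expects them differently.
        return self._legacy.fetch_invoice_total(account_id)

if __name__ == '__main__':
    service = LegacyBillingAdapter(LegacyBillingSystem())
    print(service.get_outstanding_balance('ACC-1001'))   # 42.5

In practice the same adapter would typically sit behind a web service endpoint registered in the service registry, so the Integration Layer can route consumers to it without exposing the underlying application.
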
The following are the various layers for Enterprise-level deployment of SOA. This diagram captures the abstract view of the Enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers. However, there is no strict rule that top layers should not directly communicate with bottom layers.

The diagram below represents the important logical pieces that would result from the overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom built applications, COTS-based applications like Billing, Revenue Management, and Fulfillment, and the Enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.

ERP holds the data of Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc.

CRM holds the data related to Order, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.

Content Management handles Enterprise Search and Query. The Billing application consists of the following components:

·       Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges
·       Enterprise databases will hold both the application and service data, whether structured or unstructured.

MDM – Master data mainly consists of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction so that the complex accessing logic is hidden from the other service layers. This is a basic service layer, which exposes application functionalities and data as reusable services.
The three types of the Application access services are:   ·       Application Access Service: This Service Layer exposes application level functionalities as a reusable service between BSS to BSS and BSS to OSS integration. This layer is enabled using disparate technology such as Web Service, Integration Servers, and Adaptors, etc.   ·       Data Access Service: This Service Layer exposes application data services as a reusable reference data service. This is done via direct interaction with application data. and provides the federated query.   ·       Network Access Service: This Service Layer exposes provisioning layer as a reusable service from OSS to OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.   Common Services encompasses management of structured, semi-structured, and unstructured data such as information services, portal services, interaction services, infrastructure services, and security services, etc.   3.          Integration Layer:   This consists of service infrastructure components like service bus, service gateway for partner integration, service registry, service repository, and BPEL processor. Service bus will carry the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary based routing, distributed caching of routing information, transformations, and all qualities of service for messaging-like reliability, scalability, and availability, etc. Service registry will hold all contracts (wsdl) of services, and it helps developers to locate or discover service during design time or runtime.   • BPEL processor would be useful in orchestrating the services to compose a complex business scenario or process. • Workflow and business rules management are also required to support manual triggering of certain activities within business process. based on the rules setup and also the state machine information. Application, data, and service mediation layer typically forms the overall composite application development framework or SOA Framework.   4.          Business Process Layer: These are typically the intermediate services layer and represent Shared Business Process Services. At Enterprise Level, these services are from Customer Management, Order Management, Billing, Finance, and Asset Management application domains.   5.          Access Layer: This layer consists of portals for Enterprise and provides a single view of Enterprise information management and dashboard services.   6.          Channel Layer: This consists of various devices; applications that form part of extended enterprise; browsers through which users access the applications.   7.          Client Layer: This designates the different types of users accessing the enterprise applications. The type of user typically would be an important factor in determining the level of access to applications.   8.          Vertical pieces like management, monitoring, security, and development cut across all horizontal layers Management and monitoring involves all aspects of SOA-like services, SLAs, and other QoS lifecycle processes for both applications and services surrounding SOA governance.     9.          EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices:   EA Governance is important in terms of providing the overall direction to SOA implementation within the enterprise. 
This involves board-level involvement, in addition to business and IT executives. At a high level, this involves managing the SOA projects implementation, managing SOA infrastructure, and controlling the entire effort through all fine-tuned IT processes in accordance with COBIT (Control Objectives for Information Technology).   Devising tools and techniques to promote reuse culture, and the SOA way of doing things needs competency centers to be established in addition to training the workforce to take up new roles that are suited to SOA journey.   Conclusions   Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference architectures provide best practices and approaches in the independent way a vendor deals with technology and standards. Reference Architectures model the abstract architectural elements for an enterprise independent of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices could go a long way in meeting these challenges. The use of SOA-based architecture for communication to each of the external systems like Billing, CRM, etc., in OSS/BSS system has made the architecture very loosely coupled, with greater flexibility. Any change in the external systems would be absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. Since the Architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
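For those who prefer scripting over the portal, a rough sketch of creating a VM from a gallery image with the Azure PowerShell cmdlets of this generation looks like the following; the service name, VM name, image filter and credentials are placeholders, and exact parameter names can vary between module versions:

# Sketch only: pick a gallery image, then build and create the VM in one pipeline.
$image = Get-AzureVMImage |
    Where-Object { $_.Label -like "*Windows Server 2012 R2*" } |
    Select-Object -First 1

# "demo-vm", "demo-svc" and the credentials below are placeholders.
New-AzureVMConfig -Name "demo-vm" -InstanceSize Small -ImageName $image.ImageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername "demoadmin" -Password "<your-password>" |
    New-AzureVM -ServiceName "demo-svc" -Location "West US"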
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
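Tying back to the Service Bus portion above: the same Enable Partitioning option can also be set from code. A rough sketch with the .NET Service Bus client of this SDK generation, where the connection string and queue name are placeholders:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Sketch: create a partitioned queue programmatically.
var connectionString = "<Service Bus connection string from the portal>";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
var description = new QueueDescription("orders")
{
    EnablePartitioning = true   // spreads the queue across multiple message brokers and stores
};
if (!namespaceManager.QueueExists(description.Path))
{
    namespaceManager.CreateQueue(description);
}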
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • BIP BIServer Query Debug

    - by Tim Dexter
With some help from Bryan, I have uncovered a way of being able to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its datamodel to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on. When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL; it's actually logical SQL, e.g. select Jobs."Job Title" as "Job Title", Employees."Last Name" as "Last Name", Employees.Salary as Salary, Locations."Department Name" as "Department Name", Locations."Country Name" as "Country Name", Locations."Region Name" as "Region Name" from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model: just go get me the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it and passes the result back to BIP. select distinct T32556.JOB_TITLE as c1, T32543.LAST_NAME as c2, T32543.SALARY as c3, T32537.DEPARTMENT_NAME as c4, T32532.COUNTRY_NAME as c5, T32577.REGION_NAME as c6 from JOBS T32556, REGIONS T32577, COUNTRIES T32532, LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543 where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID and T32532.REGION_ID = T32577.REGION_ID and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID and T32537.LOCATION_ID = T32569.LOCATION_ID and T32543.JOB_ID = T32556.JOB_ID ) Not a very tough example, I know, but you get the idea. How do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps: In the Administrator tool you need to set the logging level for the Administrator user to something greater than the default '0'. '7' is going to give you the max. Just remember to take it back down after you have finished the debug. I needed to bounce my BIServer service. Now here's the secret sauce. Prefix the following to your BIP query: set variable LOGLEVEL = 7; Set the log level to the one you have in the admin tool. Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file. This is located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you. A quick note: if the BIServer can, it's going to hit that great BIEE cache to get your data and you may not see the full log. If this is the case, get into the Administration page (via the browser login) and clear out your BIP report cursor. Then re-run. This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is getting some strange data. Don't forget to turn that logging level back down once you are done. This will avoid the DBA screaming at you for sucking up all the disk space on the system.

    Read the article

  • Configuring Jenkins for running with BitBucket

    - by Claus
    I'm trying to setup Jenkins on my mac mini in order to pull my iOS project source code from BitBucket and build it automatically. I've already gone through the major well know problems generating the ssh keys,uploading them in BitBucket,performing an ssh connection by console for adding the host to the well know list (you can find all my adventure here and here). Now,there are 3 user in my system: A,B and Shared. When I installed Jenkins it automatically placed itself in Shared, but I generated the ssh keys with the user A. So just to be clear In the A home directory there is an .ssh directory with public and private keys. When I try to run by Jenkins job I get this error message: Started by user anonymous Building in workspace /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace - hudson.remoting.LocalChannel@625cb0bb Using strategy: Default Cloning the remote Git repository Cloning repository [email protected]:myuser/myproject.git git --version git version 1.8.0 ERROR: Error cloning remote repo 'origin' : Could not clone [email protected]:myuser/myproject.git hudson.plugins.git.GitException: Could not clone [email protected]:myuser/myproject.git at hudson.plugins.git.GitAPI.clone(GitAPI.java:271) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1036) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) Caused by: hudson.plugins.git.GitException: Command "/usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace" returned status code 128: stdout: Cloning into '/Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace'... stderr: Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:885) at hudson.plugins.git.GitAPI.access$000(GitAPI.java:40) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:267) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:246) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitAPI.clone(GitAPI.java:246) ... 
14 more Trying next repository ERROR: Could not clone repository FATAL: Could not clone hudson.plugins.git.GitException: Could not clone at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1048) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) As you can see it fails when Hudson try to run the GIT command. The odd things is that if I try to run /usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace In my console, it works fine (after fixing a small problem relative the folder write permission with chmod) I found a post reporting a similar error which names a number of possible options but I'm not sure how to perform correctly these operations on my console. It looks like Jenkins is trying to run a command with a user which doesn't have permission to retrieve the appropriate keys from my .ssh directory.Not really sure.Maybe this output can help: MacMini:~ myuser$ ps axu | grep "/jenkins" myuser 11660 0.0 4.6 2918124 97096 ?? S 6:59pm 1:05.63 /usr/bin/java -jar /Users/myuser/Library/Caches/org.jenkins-ci.jenkins/jenkins.war jenkins 9896 0.0 9.0 2939824 188552 ?? Ss 4:06pm 17:55.91 /usr/bin/java -jar /Applications/Jenkins/jenkins.war myuser 11930 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep /jenkins MacMini:~ myuser$ ps axu | grep tomcat myuser 11932 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep tomcat MacMini:~ myuser$ I really hope to fix this problem, because I would like to write a very detailed tutorial with all the information I found disseminated around the web.
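A minimal sketch of the usual direction such a fix takes, assuming the daemon really does run as a dedicated 'jenkins' account (the user name below is an assumption, not something taken from this setup): give that account its own key pair and known_hosts entry rather than relying on user A's.

# Assumption: Jenkins runs as a user named "jenkins" with its own home directory.
sudo -u jenkins -H ssh-keygen -t rsa -C "jenkins@macmini"                  # creates ~jenkins/.ssh/id_rsa(.pub)
sudo -u jenkins -H sh -c 'ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts' # pre-trust the host key
# After adding ~jenkins/.ssh/id_rsa.pub to the BitBucket account, verify non-interactively:
sudo -u jenkins -H ssh -T git@bitbucket.org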

    Read the article

  • Extended URL php

    - by web.ask
    I have a problem with the extended url here in php. I'm not even sure if it's called extended url. I was wondering if someone can help me since i'm no good with PHP. I have a php page the careers.php it shows the career listing but when I click on Learn more link it goes to http(dot)localhost/houseofpatel/jobs/4/Career-4 but it outputs an "Object not found". Does xampplite have something to do with this? or is it just the codes? I also have the breadcrumb.php in here. Please see code below: session_start(); include("Breadcrumb.php"); $trail = new Breadcrumb(); $trail->add('Careers', $_SERVER['PHP_SELF'], 1); $trail -> output(); # REMOVE SPECIAL CHARACTERS $sPattern = array('/[^a-zA-Z0-9 -]/', '/[ -]+/', '/^-|-$/'); $sReplace = array('', '-', ''); mysql_query("SET CHARACTER_SET utf8"); //$query = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted DESC LIMIT 0, 4"; if(!$_GET['jobs']){ $jobsPerPage = 4; }else{ $jobsPerPage = $_GET['jobs']; } if(!$_GET['page']){ $currPage = 1; $showMoreJobs = true; }else{ $currPage = $_GET['page']; $showMoreJobs = false; } $offset = ($currPage - 1 )* $jobsPerPage; $query = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted"; $process = mysql_query($query); if(@mysql_num_rows($process) > 0){ $totalRows = @mysql_num_rows($process); $query2 = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted DESC LIMIT $offset, $jobsPerPage"; $process2 = mysql_query($query2); if(@mysql_num_rows($process2) > 0){ $pagNav = pageNavigator($totalRows,$jobsPerPage,$currPage,''); while($row = @mysql_fetch_array($process2)){ $id = $row[0]; $title = stripslashes($row['title']); $title1 = preg_replace($sPattern, $sReplace, $title); $description = stripslashes($row['description']); $description1 = substr($description, 0, 200); $careers_list .= ' <div class="left" style="width:340px; padding-left:10px;"> <p><span class="blue"><strong>'.$title.'</strong></span></p> '.$description1.'... <p><a href="jobs/'.$id.'/'.$title1.'" title="Learn more">[ Learn More ]</a></p> </div>'; } /*$careers_list .= ' <div class="left" style="width: 100%; margin-top:15px;"> <p><a href="more-jobs" title="More Job Opennings">MORE JOB OPENNINGS >> </a></p> <div class="divider left"></div> </div>'; */ if($showMoreJobs){ if($totalRows > $jobsPerPage){ $currPage++; } $jobNav = "<a href=careers.php?page=$currPage title='More Job Opennings'>MORE JOB OPENNINGS >> </a>"; }else{ $jobNav = "$pagNav"; } $careers_list .= "<div class='left' style='width: 100%; margin-top:15px;'> <p>$jobNav</p> <div class='divider left'></div> </div>"; } }
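One guess at the "Object not found" part: a URL like /houseofpatel/jobs/4/Career-4 only resolves if something rewrites it back to a real script. A minimal .htaccess sketch for the houseofpatel folder might look like the following, assuming Apache's mod_rewrite is enabled in XAMPP; career-details.php is a hypothetical handler name, not a file from the original project:

# /houseofpatel/.htaccess -- sketch only
RewriteEngine On
RewriteBase /houseofpatel/
# Map jobs/<id>/<slug> to a real script; career-details.php is a made-up name.
RewriteRule ^jobs/([0-9]+)/([^/]+)/?$ career-details.php?id=$1 [L,QSA]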

    Read the article

  • cron.daily not running at the time it should?

    - by Mariano Martinez Peck
    My /etc/cron.daily scripts seem to be executing far later from what I understand they should. I am in Ubuntu and anacron is installed. If I do a sudo cat /var/log/syslog | grep cron I get something like: Aug 23 01:17:01 mymachine CRON[25171]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 23 02:17:01 mymachine CRON[25588]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 23 03:17:01 mymachine CRON[26026]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 23 03:25:01 mymachine CRON[30320]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )) Aug 23 04:17:01 mymachine CRON[26363]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 23 05:17:01 mymachine CRON[26770]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 23 06:17:01 mymachine CRON[27168]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 23 07:17:01 mymachine CRON[27547]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 23 07:30:01 mymachine CRON[2249]: (root) CMD (start -q anacron || :) Aug 23 07:30:02 mymachine anacron[2252]: Anacron 2.3 started on 2014-08-23 Aug 23 07:30:02 mymachine anacron[2252]: Will run job `cron.daily' in 5 min. Aug 23 07:30:02 mymachine anacron[2252]: Jobs will be executed sequentially Aug 23 07:35:02 mymachine anacron[2252]: Job `cron.daily' started As you can see, at 3:25 it tried to do something. But the cron.daily execution started really at 7:35. My /etc/crontab is: # /etc/crontab: system-wide crontab # Unlike any other crontab you don't have to run the `crontab' # command to install the new version when you edit this file # and files in /etc/cron.d. These files also have username fields, # that none of the other crontabs do. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly 25 3 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) 47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) 52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) # From what I understand, daily scripts are indeed for 3:25. My /etc/anacrontab is: # /etc/anacrontab: configuration file for anacron # See anacron(8) and anacrontab(5) for details. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin HOME=/root LOGNAME=root # These replace cron's entries 1 5 cron.daily run-parts --report /etc/cron.daily 7 10 cron.weekly run-parts --report /etc/cron.weekly @monthly 15 cron.monthly run-parts --report /etc/cron.monthly So...does someone know why my cron started to do something at 3:25 but then really start the jobs at 7:35? Also..as you can see in the log, hourly jobs are being executed at correct time: hour and 17 minutes, which is exactly what I have in /etc/crontab Finally, from the logs, it seems my daily jobs are being actually run by anacron rather than cron? So cron finds nothing to run (at 3:25) and then anacron runs the jobs at 7:35? If true, how can I fix this? Thanks in advance,

    Read the article

  • Print directly to CUPS server from non-local clients (Ubuntu 14.04)

    - by OEP
    I set up a CUPS server with a few queues and printing from local clients (the CUPS test page and Samba) seems to work just fine. It seems like the CUPS server is denying non-local clients though: 130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 390 Validate-Job successful-ok 130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 339 Create-Job client-error-not-authorized localhost - - [03/Jun/2014:14:40:50 -0400] "POST /printers/m137 HTTP/1.1" 200 410869 Print-Job successful-ok This makes me think I have some sort of host-based restriction in my configuration file, but I can't find it. I've even set my default policy to Allow all only to get the same log message. I'm working from a configuration file which had previously worked on an older version of CUPS, which looks quite similar to the example cupsd.conf. I could be wrong but it looks like that final <Limit All> block ought to allow the actions the logs complain about. MaxLogSize 2000000000 # Log general information in error_log - change "info" to "debug" for # troubleshooting... LogLevel info #AccessLog syslog #ErrorLog syslog #PageLog syslog # Administrator user group... SystemGroup sys root lp # Only listen for connections from the local machine. Listen 0.0.0.0:631 Listen :::631 Listen /var/run/cups/cups.sock ServerName <snipped> # Show shared printers on the local network. Browsing Off BrowseOrder allow,deny # (Change '@LOCAL' to 'ALL' if using directed broadcasts from another subnet.) BrowseAllow @LOCAL # Default authentication type, when authentication is required... DefaultAuthType Basic # Restrict access to the server... <Location /> Order allow,deny Allow all </Location> # Restrict access to the admin pages... <Location /admin> AuthType Default Require user @SYSTEM Encryption Required Order allow,deny Allow all </Location> # Restrict access to configuration files... <Location /admin/conf> AuthType Default Require user @SYSTEM Encryption Required Order allow,deny Allow all </Location> # Set the default printer/job policies... <Policy default> # Job-related operations must be done by the owner or an administrator... <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> # All administration operations require an administrator to authenticate... <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default> AuthType Default Require user @SYSTEM Order deny,allow </Limit> # All printer operations require a printer operator to authenticate... <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> # Only the owner or an administrator can cancel or authenticate a job... <Limit Cancel-Job CUPS-Authenticate-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order allow,deny </Limit> </Policy>
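One detail worth a second look: operations not matched by any of the earlier <Limit> blocks (Create-Job and Print-Job among them) fall through to the closing <Limit All> block, and with Order allow,deny and no Allow directive that block denies remote hosts. A sketch of a more permissive closing block, assuming the intent is to accept jobs from the local network:

  # Sketch: permit operations not covered by the earlier Limit blocks for LAN clients.
  <Limit All>
    Order allow,deny
    Allow @LOCAL
  </Limit>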

    Read the article

  • SQLAlchemy DetachedInstanceError with regular attribute (not a relation)

    - by haridsv
    I just started using SQLAlchemy and get a DetachedInstanceError and can't find much information on this anywhere. I am using the instance outside a session, so it is natural that SQLAlchemy is unable to load any relations if they are not already loaded, however, the attribute I am accessing is not a relation, in fact this object has no relations at all. I found solutions such as eager loading, but I can't apply to this because this is not a relation. I even tried "touching" this attribute before closing the session, but it still doesn't prevent the exception. What could be causing this exception for a non-relational property even after it has been successfully accessed once before? Any help in debugging this issue is appreciated. I will meanwhile try to get a reproducible stand-alone scenario and update here. Update: This is the actual exception message with a few stacks: File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 159, in __get__ return self.impl.get(instance_state(instance), instance_dict(instance)) File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 377, in get value = callable_(passive=passive) File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/state.py", line 280, in __call__ self.manager.deferred_scalar_loader(self, toload) File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/mapper.py", line 2323, in _load_scalar_attributes (state_str(state))) DetachedInstanceError: Instance <ReportingJob at 0xa41cd8c> is not bound to a Session; attribute refresh operation cannot proceed The partial model looks like this: metadata = MetaData() ModelBase = declarative_base(metadata=metadata) class ReportingJob(ModelBase): __tablename__ = 'reporting_job' job_id = Column(BigInteger, Sequence('job_id_sequence'), primary_key=True) client_id = Column(BigInteger, nullable=True) And the field client_id is what is causing this exception with a usage like the below: Query: jobs = session \ .query(ReportingJob) \ .filter(ReportingJob.job_id == job_id) \ .all() if jobs: # FIXME(Hari): Workaround for the attribute getting lazy-loaded. jobs[0].client_id return jobs[0] This is what triggers the exception later out of the session scope: msg = msg + ", client_id: %s" % job.client_id
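A commonly suggested remedy, sketched under the assumption that the session is committed and closed before the object is handed back: configure the session factory with expire_on_commit=False, so already-loaded column values such as client_id stay populated on the detached instance instead of being expired and re-fetched.

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Sketch: the engine URL is a placeholder; ReportingJob is the model from the snippet above.
engine = create_engine("postgresql://user:password@localhost/reporting")
Session = sessionmaker(bind=engine, expire_on_commit=False)

def load_job(job_id):
    session = Session()
    try:
        # .first() loads all column attributes, including client_id, before the session closes.
        return session.query(ReportingJob).filter(ReportingJob.job_id == job_id).first()
    finally:
        session.close()

job = load_job(42)
print(job.client_id)  # no refresh is attempted, so no DetachedInstanceError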

    Read the article

  • PHP - Select from database the same query

    - by How to PHP
I created a table that contains each user's name and his job, and created a PHP page that shows me all the users whose job is doctor. I put doctor into a variable and then selected from the table where job equals $doctor. That works, but I need each job's users grouped into their own table on the page, with every other job getting its own table on the same page as well. This is my code; it shows only the users who work as doctors, in one table: <html> <h1>Doctors</h1> </html> <?php mysql_connect('localhost','root',''); mysql_select_db('data'); $doctor='doctor'; $query = mysql_query("SELECT * FROM `users` WHERE `job` = '$doctor'") or die(mysql_error()); while ($arr = mysql_fetch_array($query)) { $name = $arr['name']; echo $name; } ?> That shows me the doctors when I put doctor into a variable; I want to show all users with the same job in a table, for every job. Is there a way to do this? Thanks :)
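A minimal sketch of one way to get a separate table per job, sticking with the mysql_* functions used above (note these are deprecated in newer PHP): order the rows by job and start a new table whenever the job value changes.

<?php
// Sketch: one query ordered by job, then a new <table> each time the job value changes.
mysql_connect('localhost', 'root', '');
mysql_select_db('data');

$query = mysql_query("SELECT `name`, `job` FROM `users` ORDER BY `job`, `name`")
    or die(mysql_error());

$currentJob = null;
while ($row = mysql_fetch_array($query)) {
    if ($row['job'] !== $currentJob) {
        if ($currentJob !== null) {
            echo "</table>";
        }
        $currentJob = $row['job'];
        echo "<h1>" . htmlspecialchars($currentJob) . "s</h1><table>";
    }
    echo "<tr><td>" . htmlspecialchars($row['name']) . "</td></tr>";
}
if ($currentJob !== null) {
    echo "</table>";
}
?>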

    Read the article

  • When to drop an IT job

    - by Nippysaurus
In my career I have had two programming jobs. Both of these jobs were in the field I am most familiar with (C# / MSSQL), but I have quit both for the same reason: unmanageable code and bad (loose) company structure. There was something in common between these jobs: small companies (in one I was the only developer). Currently I am in the following position: being given written instructions which are almost impossible to follow (somewhat of a fool's errand). We are given short time constraints, but seldom asked how long the work will take, and when we are asked, the estimate is always too long and needs to be shorter (and when it ends up taking longer than they need it to take, it's always our fault). There is no time for proper documenting, but we get blamed for not documenting (see previous point). Management is constantly screwing me around, saying I'm underperforming on a given task (which is not true) and switching me to a task which is much more confusing. So I must ask my fellow developers: how bad does a job need to be before you would consider jumping ship? And what should you look out for when considering taking a job? In the future I will be asking about documented procedures, release control, bug management and adoption of new technologies. EDIT: Let me add some more fuel to the fire ... I have been in my current job for just over a year, and the work I am doing almost never uses any of the knowledge I have gained from the other work I have been doing here. Everything is a giant learning curve. Because of this, about 30% of my time is spent learning what is going on with this new product (whose owner / original developer has left the company), 30% trying to find the relevant documentation that helps the whole thing make sense, 30% actually finding where to make the change, and 10% actually making the change.

    Read the article

  • API-based solutions for sending payments to people without bank accounts

    - by Tauren
    I'm looking for inexpensive ways to send payments to hundreds or thousands of individual contractors, even if they do not have a bank account. Currently I only need to support payment in the USA, but may eventually be international. Here's the scenario: I offer a service that allows an organization or manager-type person to coordinate contractors for very short term jobs. These jobs are typically only an hour or two in length. A contractor may get only one job over an entire month, several jobs spread out over a month, multiple jobs on a single day, or any other combination. Thus, a single contractor could earn as little as one job's payment up to potentially payment for dozens. Payment for a month could be as little as $10 up to $1000's. Right now, the system provides payroll reports to the manager and it is the manager's responsibility to produce checks, stuff envelopes, and send mail via the US postal service. I'd like to remove this burden from the manager and have all the payments taken care of for them automatically by the system. I'm not sure where to start or what the best options would be. I'm starting to look into the following solutions, but don't know specifics yet and would like some advice before pursuing them. I'd also like to hear about other ideas or suggestions. PayPal (Send Money, Adaptive Payments, x.com, other???) Amazon (Flexible Payments System?) Fund some sort of pre-paid debit card? Web service with API that mails checks for you? Direct deposit via a bank API (for users with bank accounts)? The problem is that many of these contractors may not be able to obtain bank accounts or credit cards within the USA. I don't mind doing a hybrid of solutions, but are there any that would work well with this issue? I want the solution to be easy to use for the contractors, meaning that they can get the money easily (via check in the mail, debit card ATM withdrawal, etc.)

    Read the article

  • Semantically correct nested anchors

    - by Blackie123
I am working on a web application, and part of what we are creating is something that could be described as inline editing. To portray the thing I am trying to solve, I'll use the example of a Facebook post. You have a post like: Steve Jobs added 5 new photos. "Steve Jobs" is a link that redirects to his profile, so in HTML that would be: <div class="post-block"> <p><a href="stevejobs/" title="#">Steve Jobs</a> added 5 new photos.</p> </div> But what I want is for the whole post "block" to be clickable, although I want only the name to appear to be a link. Normally in HTML, logic would say to do this: <a href="stevejobs/" title="#"><div class="post-block"> <p><a href="stevejobs/" title="#">Steve Jobs</a> added 5 new photos.</p> </div></a> But this isn't semantically correct. A quick look at HTML 4.01 or any other standard says something like this: Links and anchors defined by the A element must not be nested; an A element must not contain any other A elements. How can I create this in a semantically correct way, other than using JavaScript and defining a div:hover state for the whole "div anchor"?
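One approach that keeps the markup valid, sketched here rather than prescribed: keep the single inner anchor and stretch its clickable area over the positioned container with a pseudo-element, so no anchors are nested and the name is still the only thing that looks like a link.

<div class="post-block">
  <p><a class="post-link" href="stevejobs/" title="#">Steve Jobs</a> added 5 new photos.</p>
</div>

<style>
  .post-block { position: relative; }
  /* Stretch the anchor's hit area over the whole block without nesting <a> elements. */
  .post-link::after {
    content: "";
    position: absolute;
    top: 0; right: 0; bottom: 0; left: 0;
  }
</style>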

    Read the article

  • Are Java developers really paid more than .Net developers? Also, should I just try and work with C?

    - by Jon
OK, so I graduated about a year and a half ago. In my college days I used to be really good at (and keen on) C/C++. I've even done some multithreading and socket programming in C, and created networked GUI apps for Linux using GTK, etc. Now, I've been working since I graduated and, let's say, I've been stereotyped with .NET. That's all I have experience with in these years (VB.NET, ASP.NET, JavaScript). I'm pretty comfortable with all of the above and also SQL Server. a) Cutting to the chase, I've begun to notice that .NET jobs don't seem to pay as well as Java. Jobs asking for a similar number of years of experience offer almost twice as much for Java developers. Does that mean I should consider switching? b) My heart still goes out to C. As a surrogate I've tried working with VC++ 6.0 (I know they aren't the same!). I've tried my hand at the DDK, SDK, COM, etc. Should I try finding jobs there, or am I too inexperienced? Somehow, I can't find any openings for C. Where are they? Sometimes I get frustrated and feel that maybe I should just go for an MS or something. Possibly that'll help me land 'fatter paying' jobs in the field I wish to work in.

    Read the article

  • first major web app

    - by vbNewbie
I have created a web app version of my previous crawler app, and the initial form has controls to allow the client to make selections and start a search 'job'. These search 'jobs' will be run by different threads, created individually and added to a list to keep track of them. Now the idea is to have another web form that will display this list of 'jobs' and their current status, and will allow the jobs to be cancelled or removed only from the server side. This second form contains a grid to display these jobs. Now I have no idea if I should create the threads in the initial form's code or send all user input to my main class which runs the search, and if so, how do I pass the thread list to the second form to have it displayed on the grid? Any ideas really appreciated. Dim count As Integer = 0 Dim numThread As Integer = 0 Dim jobStartTime As Date Dim thread = New Thread(AddressOf ResetFormControlValues) 'StartBlogDiscovery) jobStartTime = Date.Now thread.Name = "Job" & jobStartTime 'clientName Session("Job") = "Job" & jobStartTime 'clientName thread.start() thread.sleep(50000) If numThread >= 10 Then For Each thread In threadlist thread.Join() Next Else numThread = numThread + 1 SyncLock threadlist threadlist.Enqueue(thread) End SyncLock End If This is the code that is called when the user clicks the search button on the initial form. This is what I just thought might work on the second web form if I used the session method. Try If Not Page.IsPostBack Then If Not Session("Job") = Nothing Then Grid1.DataSource = Session("Job") Grid1.DataBind() End If End If Finally
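A sketch of one way to make the job list visible to the second form, assuming a process-wide registry is acceptable; SearchJob and JobRegistry are made-up names, not from the code above: keep the list in a Shared class instead of in the page that started the threads, and bind the grid to it.

' Sketch: an application-wide registry the second form can bind its grid to.
' SearchJob is a placeholder for whatever per-job info (name, status, thread) gets tracked.
Imports System.Collections.Generic

Public Class SearchJob
    Public Property Name As String
    Public Property Status As String
End Class

Public Class JobRegistry
    Private Shared ReadOnly PadLock As New Object()
    Public Shared ReadOnly Jobs As New List(Of SearchJob)()

    Public Shared Sub Add(job As SearchJob)
        SyncLock PadLock
            Jobs.Add(job)
        End SyncLock
    End Sub
End Class

' In the second form's Page_Load:
'   Grid1.DataSource = JobRegistry.Jobs
'   Grid1.DataBind()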

    Read the article

  • How can I get a list of modified records from a SQL Server database?

    - by Pixelfish
I am currently in the process of revamping my company's management system to run a little leaner in terms of network traffic. Right now I'm trying to figure out an effective way to query only the records that have been modified (by any user) since the last time I asked. When the application starts it loads the job information and caches it locally like the following: SELECT * FROM jobs. I am writing out the date/time a record was modified, a la UPDATE jobs SET Widgets=@Widgets, LastModified=GetDate() WHERE JobID=@JobID. When any user requests the list of jobs I query all records that have been modified since the last time I requested the list, like the following: SELECT * FROM jobs WHERE LastModified>=@LastRequested, and store the date/time of the request to pass in as @LastRequested when the user asks again. In theory this will return only the records that have been modified since the last request. The issue I'm running into is when the user's date/time is not quite in sync with the server's date/time, and also the server load when querying an un-indexed date/time column. Is there a more effective system than querying date/time information?
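One commonly used alternative that sidesteps clock skew entirely, sketched with illustrative column names: add a rowversion column, which SQL Server increments automatically on every insert and update, and have each client remember the highest value it has seen.

-- Sketch: rowversion is maintained by SQL Server itself, so client/server clock drift is irrelevant.
ALTER TABLE jobs ADD RowVer rowversion;
CREATE INDEX IX_jobs_RowVer ON jobs (RowVer);

-- The client asks only for rows changed since the watermark it last saw (a binary(8) value):
SELECT *
FROM jobs
WHERE RowVer > @LastRowVer;

-- Refresh the watermark after each fetch:
SELECT MAX(RowVer) AS LatestRowVer FROM jobs;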

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • Silverlight Recruiting Application Part 4 - Navigation and Modules

    After our brief intermission (and the craziness of Q1 2010 release week), we're back on track here and today we get to dive into how we are going to navigate through our applications as well as how to set up our modules. That way, as I start adding the functionality- adding Jobs and Applicants, Interview Scheduling, and finally a handy Dashboard- you'll see how everything is communicating back and forth. This is all leading up to an eventual webinar, in which I'll dive into this process and give an honest look at the current story for MVVM vs. Code-Behind applications. (For a look at the future with SL4 and a little thing called MEF, check out what Ross is doing over at his blog!) Preamble... Before getting into really talking about this app, I've done a little bit of work ahead of time to create a ton of files that I'll need. Since the webinar is going to cover the Dashboard, it's not here, but otherwise this is a look at what the project layout looks like (and remember, this is both projects since they share the .Web): So as you can see, from an architecture perspective, the code-behind app is much smaller and more streamlined- aka a better fit for the one man shop that is me. Each module in the MVVM app has the same setup, which is the Module class and corresponding Views and ViewModels. Since the code-behind app doesn't need a go-between project like Infrastructure, each MVVM module is instead replaced by a single Silverlight UserControl which will contain all the logic for each respective bit of functionality. My Very First Module Navigation is going to be key to my application, so I figured the first thing I would set up is my MenuModule. First step here is creating a Silverlight Class Library named MenuModule, creating the View and ViewModel folders, and adding the MenuModule.cs class to handle module loading. The most important thing here is that my MenuModule inherits from IModule, which runs an Initialize on each module as it is created; in my case, that adds the views to the correct regions. Here's the MenuModule.cs code: public class MenuModule : IModule { private readonly IRegionManager regionManager; private readonly IUnityContainer container; public MenuModule(IUnityContainer container, IRegionManager regionmanager) { this.container = container; this.regionManager = regionmanager; } public void Initialize() { var addMenuView = container.Resolve<MenuView>(); regionManager.Regions["MenuRegion"].Add(addMenuView); } } Pretty straightforward here... We inject a container and region manager from Prism/Unity, then upon initialization we grab the view (out of our Views folder) and add it to the region it needs to live in. Simple, right? When the MenuView is created, the only thing in the code-behind is a reference to set the MenuViewModel as the DataContext. I'd like to achieve MVVM nirvana and have zero code-behind by placing the viewmodel in the XAML, but for the reasons listed further below I can't. Navigation - MVVM Since navigation isn't the biggest concern in putting this whole thing together, I'm using the Button control to handle different options for loading up views/modules. There is another reason for this- out of the box, Prism has command support for buttons, which is one less custom command I had to work up for the functionality I would need.
This comes from the Microsoft.Practices.Composite.Presentation assembly and looks as follows when put in code: <Button x:Name="xGoToJobs" Style="{StaticResource menuStyle}" Content="Jobs" cal:Click.Command="{Binding GoModule}" cal:Click.CommandParameter="JobPostingsView" /> For quick reference, 'menuStyle' is just taking care of margins and spacing, otherwise it looks, feels, and functions like everyone's favorite Button. What MVVMs this up is that the Click.Command is tied to a DelegateCommand (also coming from Prism) on the backend. This setup allows you to tie user interaction to a command you set up in your viewmodel, which replaces the standard event-based setup you'd see in the code-behind app. Due to databinding magic, it all just works. When we look at the DelegateCommand in code, it ends up like this: public class MenuViewModel : ViewModelBase { private readonly IRegionManager regionManager; public DelegateCommand<object> GoModule { get; set; } public MenuViewModel(IRegionManager regionmanager) { this.regionManager = regionmanager; this.GoModule = new DelegateCommand<object>(this.goToView); } public void goToView(object obj) { MakeMeActive(this.regionManager, "MainRegion", obj.ToString()); } } Another note for reference: ViewModelBase takes care of INotifyPropertyChanged and MakeMeActive, which switches views in the MainRegion based on the parameters. So our public DelegateCommand GoModule ties to our command on the view, which in turn calls goToView, and the parameter on the button is the name of the view (which we pass with obj.ToString()) to activate. And how do the views get the names I can pass as a string? When I called regionManager.Regions[regionname].Add(view), there is an overload that allows for .Add(view, "viewname"), with viewname being what I use to activate views. You'll see that in action next installment; I just wanted to clarify how that works. With this setup, I create two more buttons in my MenuView and the MenuModule is good to go. Last step is to make sure my MenuModule loads in my Bootstrapper: protected override IModuleCatalog GetModuleCatalog() { ModuleCatalog catalog = new ModuleCatalog(); // add modules here catalog.AddModule(typeof(MenuModule.MenuModule)); return catalog; } Clean, simple, MVVM-delicious. Navigation - Code-Behind Keeping with the history of significantly shorter code-behind sections of this series, Navigation will be no different. I promise. As I explained in a prior post, due to the one-project setup I don't have to worry about the same concerns, so my menu is part of MainPage.xaml. So I can cheese it a bit here: since I've already got three buttons all set, I'm just copying that code and adding three click-events instead of the command/commandparameter setup: <!-- Menu Region --> <StackPanel Grid.Row="1" Orientation="Vertical"> <Button x:Name="xJobsButton" Content="Jobs" Style="{StaticResource menuStyleCB}" Click="xJobsButton_Click" /> <Button x:Name="xApplicantsButton" Content="Applicants" Style="{StaticResource menuStyleCB}" Click="xApplicantsButton_Click" /> <Button x:Name="xSchedulingModule" Content="Scheduling" Style="{StaticResource menuStyleCB}" Click="xSchedulingModule_Click" /> </StackPanel> Simple, easy to use events, and no extra assemblies required! Since the code for loading each view will be similar, we'll focus on JobsView for now. The code-behind with this setup looks something like...
private JobsView _jobsView; public MainPage() { InitializeComponent(); } private void xJobsButton_Click(object sender, RoutedEventArgs e) { if (MainRegion.Content.GetType() != typeof(JobsView)) { if (_jobsView == null) _jobsView = new JobsView(); MainRegion.Content = _jobsView; } } What am I doing here? First, for each 'view' I create a private reference which MainPage will hold on to. This allows for a little bit of state maintenance when switching views. When a button is clicked, first we make sure the 'view' type isn't active (why load it again if it is already at center stage?), then we check if the view has been created and create it if necessary, then load it up. Three steps to switching views, and it's as easy as pie. Part 4 Results The end result of all this is that I now have a menu module (MVVM) and a menu section (code-behind) that load their respective views. Since I'm using the same exact XAML (except with commands/events depending on the project), the end result for both is again exactly the same and I'll show a slightly larger image to show it off: Next time, we add the Jobs Module and wire up RadGridView and a separate edit page to handle adding and editing new jobs. That's when things get fun. And somewhere down the line, I'll make the menu look slicker. :)
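One piece the post references but never shows is the ViewModelBase that "takes care of INotifyPropertyChanged and MakeMeActive". Purely as a hedged sketch (this is not the author's class), it could look something like the following against the Prism v2 / Composite Application Library API the series already uses, with MakeMeActive relying on the .Add(view, "viewname") overload described above:

    using System.ComponentModel;
    using Microsoft.Practices.Composite.Regions;

    // Hypothetical stand-in for the ViewModelBase referenced in the post; the real
    // class is not shown there, so the details here are guesses.
    public abstract class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }

        // Activates a view that was registered via regionManager.Regions[region].Add(view, "viewname").
        protected void MakeMeActive(IRegionManager regionManager, string regionName, string viewName)
        {
            var region = regionManager.Regions[regionName];
            var view = region.GetView(viewName);
            if (view != null)
                region.Activate(view);
        }
    }

A base class along these lines is what lets the MenuViewModel shown earlier switch views with a single MakeMeActive call instead of touching the region plumbing directly.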

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service. If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options? PaaS Option 1: Worker Roles Use a worker role to schedule and execute actions at specific time periods. There are a few frameworks available to assist with this: http://azuretoolkit.codeplex.com https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It’s a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily). I found the Azure Toolkit option the simplest to implement. Step 1: Create a domain entity implementing IJob for each job to schedule. In this sample, I asynchronously call a WCF service method. 1: namespace Acme.WorkerRole.Jobs 2: { 3: using AzureToolkit; 4: using ScheduledTasksService; 5: 6: public class UploadEmployeesJob : IJob 7: { 8: public void Run() 9: { 10: // Call Tasks Service 11: var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService"); 12: client.UploadEmployees(); 13: client.Close(); 14: } 15: } 16: } Step 2: In the worker role run method, add the jobs to the toolkit engine. 1: namespace Acme.WorkerRole 2: { 3: using AzureToolkit.Engine; 4: using Jobs; 5: 6: public class WorkerRole : WorkerRoleEntryPoint 7: { 8: public override void Run() 9: { 10: var engine = new CloudEngine(); 11: 12: // Add Scheduled Jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference). 13: 14: // 1. Upload Employee job - 8.00 PM every weekday (Mon-Fri) 15: engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; }); 16: // 2. Purge Data job - 10 AM every Saturday 17: engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; }); 18: // 3. Process Exceptions job - Every 5 minutes 19: engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; }); 20: 21: engine.Run(); 22: base.Run(); 23: } 24: } 25: } Pros: The Azure Toolkit option is simple to implement. Cons: For the AzureToolkit option, you are limited to a single worker role; otherwise, the jobs will be executed multiple times, once for each worker role instance. You are also paying for a continuously running worker role, even if it just processes a single job once a week. If you only have a few scheduled tasks to run calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient; however, even an extra small worker role still costs $14.40/month (03/09/2012). Option 2: Use Scheduled Task on Azure Web Role calling a console app Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here: http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/ http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/ http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx Pros: Fairly easy to implement. Cons: Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started.
I also tried deleting the task and rebooting; the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern. Scalability - multiple instances would trigger multiple tasks. You can only have one instance for the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance, but if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: If we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console & tasks scripts in a separate web role (e.g. an empty WCF service with no real purpose to it). SaaS Option 3: Azure Marketplace I thought that someone might be offering this type of service via the Azure marketplace. At the point of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/ Pros: none. Cons: Nobody currently offers this on the Azure Marketplace. Option 4: Online Job Scheduling Service Provider There are plenty of online providers that offer this type of service on a pay-as-you-go approach. Some of these are free for small usage. Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron Pros: No bespoke development for the scheduler. Cons: Reliance on a third party. IaaS Option 5: Setup Scheduling Software on Azure IaaS VMs One of the job scheduling software offerings could be installed and configured on Azure VMs. A list of software options is here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software Pros: Enterprise distributed\resilient task scheduling service. Cons: VM setup and maintenance; software licence costs. Option 6: VM Gallery At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options. Pros: none. Cons: No current VM template. Summary For my current project, which had a small handful of tasks to schedule and a limited budget, I chose option 1 (a worker role using the Azure Toolkit to schedule tasks). If I were building an enterprise-scale solution for the future, options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include task scheduling in the future as part of their PaaS offerings.
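Option 2 describes the scheduled-task approach but never shows the console application itself. As a purely hypothetical sketch (not from the post), it is little more than a thin wrapper around the same WCF client the Option 1 job uses, so the Task Scheduler entry just points at the compiled executable; the client class and endpoint name below are borrowed from the Option 1 listing, and everything else is illustrative.

    using System;

    // Hypothetical console app launched by the Windows Scheduled Task on the web role.
    namespace Acme.TaskRunner
    {
        public static class Program
        {
            public static int Main(string[] args)
            {
                try
                {
                    var client = new ScheduledTasksService.ScheduledTasksServiceClient(
                        "BasicHttpBinding_IScheduledTasksService");
                    client.UploadEmployees();   // the scheduled action
                    client.Close();
                    return 0;                   // success shows up in the Task Scheduler history
                }
                catch (Exception ex)
                {
                    Console.Error.WriteLine(ex);
                    return 1;                   // non-zero so a failed run is visible
                }
            }
        }
    }

Returning a distinct exit code at least makes failed runs visible in the Task Scheduler history, which softens (but does not solve) the supportability concern described above.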

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem to be a bit wordy at first, but I hope it will be easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile I have spent over 8 years compiling over 5.8 meg of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome so I could build a new database-driven home for all this research. I am still slow at it but getting there. The original premise was to find evidence of as many species of woods in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project is. I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in the Netherlands, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major desirable advancement in the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates. At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) ----- that it had gone away! After looking up on the Net what could have done this, one reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on Godaddy so I cannot go about trying to adjust any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy all worked well. The species file has about 18 columns in it but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key is the 'species_name' column in both. I am not sure it is at all proper to call one a primary key and the other a foreign key since there really is no relational connection between them. One is just more data for the other and can disappear after, never to be referred to by the working code in the application. I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names plus others) without generating NEW duplicates in the first place.
Watch out for thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process and there is no way (well, that I know of) to tell the 'DISTINCT' command this. Yes, the original 'species' table has duplicates in it even before all this but, trust me ---- they have to be removed the long hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form when bringing the new names in tervuren_target.species_name into species.species_name. I am hoping and thinking that a straight SQL solution should work --- except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP plus SQL method?? Or ..... would I have to break up the Tervuren files into a few smaller ones and run them independently (hope not....)? So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface. By the way --- I am running a quad 64 bit system with Windows 7, so at least I have some fairly hefty power on the client end. I have a direct ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)! This is my first time in this forum, so I do not know how I can receive any replies. Do I have to come back here periodically or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Much thanks for your patience and help, Bill Mudry Mississauga, Ontario Canada (next to Toronto).
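For what it's worth, the usual SQL shape for this kind of append is an anti-join keyed on species_name, and the timeout can be dodged by running it in small batches rather than one giant statement. The sketch below is only an illustration of that idea, shown in C# with MySQL Connector/NET (the same two ideas apply from PHP); the table names follow the question, while the 500-row batch size and the single-column insert list are assumptions.

    using System;
    using MySql.Data.MySqlClient;

    // Rough sketch, not a drop-in fix: append the Tervuren names that are NOT already
    // in `species`, in small batches so no single statement runs long enough to hit
    // the shared-host timeout ("MySQL server has gone away").
    public static class SpeciesMerge
    {
        public static void AppendMissingTervurenNames(string connectionString)
        {
            // Anti-join: take Tervuren rows whose species_name has no match in species yet.
            // LIMIT keeps each pass short; rows inserted by one pass are excluded by the
            // join on the next pass, so the loop converges without touching existing rows.
            const string sql =
                "INSERT INTO species (species_name) " +
                "SELECT t.species_name " +
                "FROM tervuren_target AS t " +
                "LEFT JOIN species AS s ON s.species_name = t.species_name " +
                "WHERE s.species_name IS NULL " +
                "LIMIT 500";

            using (var conn = new MySqlConnection(connectionString))
            {
                conn.Open();
                int inserted;
                do
                {
                    using (var cmd = new MySqlCommand(sql, conn))
                    {
                        cmd.CommandTimeout = 60;          // seconds per batch, not per whole job
                        inserted = cmd.ExecuteNonQuery(); // rows appended in this pass
                    }
                    Console.WriteLine("Appended {0} names", inserted);
                } while (inserted > 0);
            }
        }
    }

An INSERT IGNORE against a UNIQUE index on species_name would be simpler, but that index cannot be added while the species table still carries the old duplicates the poster wants to clean up by hand, which is why the anti-join does the duplicate check instead.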

    Read the article

  • Transaction issue in java with hibernate - latest entries not pulled from database

    - by Gearóid
    Hi, I'm having what seems to be a transactional issue in my application. I'm using Java 1.6 and Hibernate 3.2.5. My application runs a monthly process where it creates billing entries for every user in the database based on their monthly activity. These billing entries are then used to create a Monthly Bill object. The process is: (1) get users who have activity in the past month; (2) create the relevant billing entries for each user; (3) get the set of billing entries that we've just created; (4) create a Monthly Bill based on these entries. Everything works fine until Step 3 above. The Billing Entries are correctly created (I can see them in the database if I add a breakpoint after the Billing Entry creation method), but they are not pulled out of the database. As a result, an incorrect Monthly Bill is generated. If I run the code again (without clearing out the database), new Billing Entries are created and Step 3 pulls out the entries created in the first run (but not the second run). This, to me, is very confusing. My code looks like the following: for (User user : usersWithActivities) { createBillingEntriesForUser(user.getId()); userBillingEntries = getLastMonthsBillingEntriesForUser(user.getId()); createXMLBillForUser(user.getId(), userBillingEntries); } The methods called look like the following: @Transactional public void createBillingEntriesForUser(Long id) { UserManager userManager = ManagerFactory.getUserManager(); User user = userManager.getUser(id); List<AccountEvent> events = getLastMonthsAccountEventsForUser(id); BillingEntry entry = new BillingEntry(); if (null != events) { for (AccountEvent event : events) { if (event.getEventType().equals(EventType.ENABLE)) { Calendar cal = Calendar.getInstance(); Date eventDate = event.getTimestamp(); cal.setTime(eventDate); double startDate = cal.get(Calendar.DATE); double numOfDaysInMonth = cal.getActualMaximum(Calendar.DAY_OF_MONTH); double numberOfDaysInUse = numOfDaysInMonth - startDate; double fractionToCharge = numberOfDaysInUse/numOfDaysInMonth; BigDecimal amount = BigDecimal.valueOf(fractionToCharge * Prices.MONTHLY_COST); amount.scale(); entry.setAmount(amount); entry.setUser(user); entry.setTimestamp(eventDate); userManager.saveOrUpdate(entry); } } } } @Transactional public Collection<BillingEntry> getLastMonthsBillingEntriesForUser(Long id) { if (log.isDebugEnabled()) log.debug("Getting all the billing entries for last month for user with ID " + id); //String queryString = "select billingEntry from BillingEntry as billingEntry where billingEntry>=:firstOfLastMonth and billingEntry.timestamp<:firstOfCurrentMonth and billingEntry.user=:user"; String queryString = "select be from BillingEntry as be join be.user as user where user.id=:id and be.timestamp>=:firstOfLastMonth and be.timestamp<:firstOfCurrentMonth"; //This parameter will be the start of the last month ie. start of billing cycle SearchParameter firstOfLastMonth = new SearchParameter(); firstOfLastMonth.setTemporalType(TemporalType.DATE); //this parameter holds the start of the CURRENT month - ie.
end of billing cycle SearchParameter firstOfCurrentMonth = new SearchParameter(); firstOfCurrentMonth.setTemporalType(TemporalType.DATE); Query query = super.entityManager.createQuery(queryString); query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth()); query.setParameter("firstOfLastMonth", getFirstOfLastMonth()); query.setParameter("id", id); List<BillingEntry> entries = query.getResultList(); return entries; } public MonthlyBill createXMLBillForUser(Long id, Collection<BillingEntry> billingEntries) { BillingHistoryManager manager = ManagerFactory.getBillingHistoryManager(); UserManager userManager = ManagerFactory.getUserManager(); MonthlyBill mb = new MonthlyBill(); User user = userManager.getUser(id); mb.setUser(user); mb.setTimestamp(new Date()); Set<BillingEntry> entries = new HashSet<BillingEntry>(); entries.addAll(billingEntries); String xml = createXmlForMonthlyBill(user, entries); mb.setXmlBill(xml); mb.setBillingEntries(entries); MonthlyBill bill = (MonthlyBill) manager.saveOrUpdate(mb); return bill; } Help with this issue would be greatly appreciated as it's been wracking my brain for weeks now! Thanks in advance, Gearoid.

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >