Search Results


  • Generating Report for NUnit

    - by thangchung
    All source code for this post can be found on my GitHub. A while ago, someone asked me how to generate reports from NUnit test results. In truth, I had never done it; in my little programming world I only cared about the test results, red-green-refactor, and that was it. The question caught me a little off guard. I knew I could use NCover to generate reports, but NCover's reports are too simple: they give no details on the number of test cases, test methods, and so on. So I began looking into building a proper report for NUnit.

    I was lucky to find an open source project for this. Its author calls it NUnit2Report, but one disadvantage is that it only runs on .NET 1.0, which is very old compared to the current version 4.0. I downloaded it to try it out but could not get it to run, so I opened its source code and found that it uses XSLT to convert NUnit's XML output into HTML. Nothing really special: NUnit already writes its results to an XML file after each run, and the author simply transforms that file to HTML with an XSLT stylesheet. I decided to port it to .NET 4.0 so I would not have to code it from scratch. The conversion took some time, but I finally got what I wanted. Thanks to Gilles for this OSS; I will send him a mail to thank him for his efforts in putting it out as open source.

    Now I will show you how to do it. I used NAnt for the automated build, NUnit for running the test cases, and the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got the following results: one failure and two successes. The bin directory of the project contains the NUnit output file, as shown below. I then created a build file and a bat file for easy running (PowerShell works here too). Double-clicking the bat file creates the report. Finally, open the index.html file in the output folder to view the report. As you can see, the test cases are divided up very clearly, which meets my requirements. Once again, many thanks to Gilles for NUnit2Report. You can contact him via the mail address [email protected] or the website http://nunit2report.sourceforge.net. It is really useful for anyone working in QA. Hopefully this post helps anyone interested in generating reports for NUnit.
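    The heart of the approach is just an XSLT transform over NUnit's XML results file. Below is a minimal sketch of that step in C# (.NET 4.0); the file names are placeholders for your own NUnit output and stylesheet, and this is only an illustration of the technique, not the NUnit2Report code itself.

```csharp
// Minimal sketch: turn NUnit's XML output into an HTML report with an XSLT stylesheet.
// "nunit-report.xsl" and "TestResult.xml" are hypothetical file names; point them
// at your own stylesheet and at the XML file NUnit writes after a test run.
using System.Xml.Xsl;

class ReportGenerator
{
    static void Main()
    {
        var xslt = new XslCompiledTransform();
        xslt.Load("nunit-report.xsl");                  // the XSLT that defines the report layout
        xslt.Transform("TestResult.xml", "index.html"); // NUnit XML results -> HTML report
    }
}
```

    In the NAnt/bat setup described above, a step like this would run right after the NUnit target, so the generated index.html is refreshed on every build.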


  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced. It is meant to replace the 'Folders_g' architecture. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten. One of the main goals of the new folders is to scale better at large volumes and remove the limitation of 1000 content items or sub-folders within a folder. Along with the new architecture, it has a new look, and a few additional features have been added. One of those features is Query Folders: folders that are populated simply by a query rather than by literally putting items within them. The Library has always provided this, but it took an administrator to define them through the Web Layout Editor. Now users can quickly define query folders anywhere within the standard folder hierarchy.

    Framework Folders also provides the very handy ability to do metadata updates. It's similar to the Propagate feature in Folders_g, but there are some key differences that make it far more flexible and powerful:

    - It works within regular folders and Query Folders, so the content you're updating doesn't all have to be in the same folder, or in a folder at all.
    - The user decides what metadata to propagate. In Folders_g, the system administrator controls which fields are propagated using a single administration page. In Framework Folders, the user decides at propagation time which fields to update.
    - You set the value you want on the propagation screen. Folders_g propagated the metadata defined on the parent folder; with Framework Folders, you supply the new metadata value when you select the fields to update. It does not have to be defined on the parent folder.

    Because of these differences, I think the new propagate method is much more useful. Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders. Here are the basic steps to perform propagation:

    1. Create a folder for the propagation. A regular folder works, but so does a Query Folder.
    2. Go into the folder to get the results.
    3. In the Edit menu, select 'Propagate'.
    4. Select the check-box next to each field to update and enter the new value.
    5. Click the Propagate button. Once complete, a dialog will appear showing it is done.

    What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working; you aren't stuck on the page waiting for it to complete. In addition, you can add a configuration flag to the server to turn on a status indicator icon: set 'FldEnableInProcessIndicator=1' and it will show a working icon while it is doing the propagation.

    There is a caveat when using propagation on a Query Folder. While a propagation on a regular folder updates all of the items within that folder, a Query Folder propagation only updates the first 50 items. So you may need to run it multiple times depending on the size, and have the query exclude items as they get updated.

    One extra note: Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content. But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc.), you'll need to continue using Folders_g until those products are updated to use the new folders.


  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    With Oracle Utilities Application Framework 4 now available as part of Mobile Work Force Management and, progressively, other products, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This first entry covers a new security feature called Login Id. In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since moved to longer userid lengths, and it is very common to hear of email addresses being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in their preferred security store and then configure the J2EE Web Application Server to use the alias as credentials. If you were lucky, this was a standard feature of the security store and/or the J2EE Web Application Server; if not, some Java code had to be written to implement the solution.

    In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing; the user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for new or changed functionality, and you will see reference to this fact in the blog entries I will be composing over the next few months.

    We have also thought about flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or can be different. Both the Login Id and the Userid have to be unique; this avoids sharing of credentials and is also backward compatible. You can enter the Login Id manually or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, we will not automatically generate a short userid, as the rules for this can vary from site to site. You have a number of options here: most identity provisioning tools can generate a short userid at user creation time, and if you do not use provisioning tools, you can write a class extension using the SDK to auto-generate the userid based upon your site's preference. When we designed the feature there were many styles of generating userids (random, initial and surname, numbers, etc.); we could not really see a clear winner, so we simply allowed an extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way, which is why we released an Oracle Identity Manager integration with the framework. The Login Id is also case sensitive, which was not supported with the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.


  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, the masking capabilities of the product can be used to mask the data in an appropriate fashion. The inbuilt data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements:

    - An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, the application service, the security type, and the authorization levels applicable to the mask.
    - A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field, and even a child record (such as an identifier).
    - The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to that user group or not.

    For example, say there is a field called CCNBR in the product which holds credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits left unmasked (as the standard in most systems dictates). On the Field Mask I would specify: field="CCNBR", alg="CMformatCC". On the algorithm CMformatCC, I would specify the mask, the application service, the security type, and the authorization level at which users would see the credit card unmasked. To finish the configuration and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and the appropriate authorization level for each group. Whenever a user accesses the CCNBR field on any of the maintenance screens, searches, or other screens that use the CCNBR metadata definition, the value is then masked according to the user group the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.
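    To make the masking behavior concrete, here is a minimal, generic sketch in C# of the kind of masking the algorithm configuration describes (mask every character except a short suffix, skipping separator characters). This is only an illustration of the concept; it is not the F1-MASK implementation, and the function and parameter names are hypothetical.

```csharp
// Hypothetical illustration of the masking behavior described above
// (masking character, unmasked suffix, characters to ignore). This is
// not the Oracle Utilities F1-MASK algorithm.
using System;

static class MaskDemo
{
    static string Mask(string value, char maskChar = '*', int unmaskedSuffix = 4)
    {
        var chars = value.ToCharArray();
        int stillVisible = unmaskedSuffix;
        for (int i = chars.Length - 1; i >= 0; i--)
        {
            if (chars[i] == '-' || chars[i] == ' ')
                continue;                       // characters to ignore in the string
            if (stillVisible > 0)
            {
                stillVisible--;                 // leave the last few digits unmasked
                continue;
            }
            chars[i] = maskChar;
        }
        return new string(chars);
    }

    static void Main()
    {
        Console.WriteLine(Mask("4111-1111-1111-1111"));  // prints ****-****-****-1111
    }
}
```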


  • How do search engines handle hyphenated words?

    - by NinjaKC
    I am not sure my title fully explains what I mean, but I thought this might be an interesting question. If I had a set of keywords broken up with a dash or two, would search engines consider the dash-split keyword as the full keyword? Say I have a site that breaks words down the way dictionary sites do, so a keyword for a page might end up in the page and/or the URL broken by dashes: Key-word = keyword, Co-op-er-at-ive = cooperative, Pho-to-gra-phy = photography; www.example.com/key-word/, www.example.com/co-op-er-at-ive/, www.example.com/pho-to-gra-phy/. I know search engines (at least Google) treat a dash as a space and read the parts as multiple words. But in English a dash can also break a word into syllables (at least I think it can, can't it?), so will search engines take this into consideration too? I did a little research: I Googled some words with random dashes inserted, and it returned the words I searched for, but that could just be Google treating it as a typo on the user's end. So really I am wondering whether I can purposely put a dash in a keyword and have the search engine spiders still catch that keyword as the real word without dashes. I've done a little Googling and looking here on Stack Overflow, but everything comes down to dashes separating multiple words, not the specific thing I'm trying to figure out. Hopefully that makes sense. I am not an SEO expert yet, but I get the basics and have been playing around, and this is really just a random question to satisfy my curiosity :P


  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within them, and the associated SP Condition(s) on each template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate. Customer events available in the base product include:

    - Cut for Non-payment (CNP)
    - Disconnect Warning (DIWA)
    - Reconnect for Payment (REPY)
    - Reread (RERD)
    - Stop Service (STOP)
    - Start Service (STRT)
    - Start/Stop (STSP)

    Note the field values/codes defined for each event. CC&B also comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG, and values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined customer events? And how will the system detect such events in order to create field activities? The system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled when a payment was made, the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? Now we come to the actual question: what are the limitations of having a user-defined customer event? System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs which have routines to first validate the completion of disconnection field activities that were raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment. The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter." Note that this algorithm implicitly checks for Field Activities in completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered but could not find any completed CNP - Cut for Non-payment field activity, and hence could not start a reconnection severance process. This was because the field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead. To conclude: if you introduce new customer events, make sure you do not extend or simulate base customer events (the ones included in the base product), as they are used elsewhere to provide and validate additional business functions.


  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding which markets to provide goods and services to, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of its shareholders and stakeholders through the creation of customer value.

    According to InformationWeek.com, Google has a distinctive technology advantage over competitors like Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads, and this hardware gives it a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive than programmers working for its competitors; he based this theory on Google's competitors having to spend up to four times as much just to keep up. Beyond its technological advantage, Google has also developed a decentralized management schema where employees report directly to multiple managers and team project leaders. This allows the responsibility of the technology department to be shared amongst multiple senior-level engineers and removes the need for a single department head to oversee the activities of the department. This is a departure from the standard management style, in which a department head like a CIO or CTO oversees the department's global initiatives and business functionality, which are then passed down and administered through middle management and implemented by programmers, business analysts, network administrators, and database administrators.

    It goes without saying that an IT professional's responsibilities would be shaped by Google's technological advantage and management strategy, simply because they work within the department and would have to design, develop, and support the high-performance systems while reporting to multiple managers and project leaders on a regular basis. Since Google was established on and driven by new and emerging technology, all other departments are directly impacted by the technology department; in fact, they have to cater to the technology department since it is a huge driving force in the success of Google. Reference: http://www.smbtn.com/smallbusinessdictionary/#b http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=


  • OUAB Europe Globalization Topics

    - by ultan o'broin
    Pleased to announce that the Oracle Usability Advisory Board has added a globalization workgroup for 2011, which will be headed up by myself. The aims of this workgroup are: to understand how our customers use translated versions of applications; to identify key international support, translation, and localization-related usability issues in deployed applications; and to make recommendations to Oracle usability and development teams about meeting global customer usability requirements in current and future versions of our applications. Issues include: how international users use applications when working, ethnography opportunities, key cultural impacts on usability, multilingual feature usage, localization of forms and reports, language quality, extensibility, translation of user assistance, user-generated and rich-media content like UPK, and international mobile application opportunities. More details will be available on the usableapps.oracle.com website shortly.


  • Introducing the Documentation Workflows

    - by Owen Allen
    The how-to documents provide end-to-end examples of specific features, such as creating a new zone or discovering a new system. We are enhancing the individual how-tos with documents called Workflows. Each workflow is built around a procedural flowchart that shows a larger, more complex task. The workflow indicates which how-tos or other workflows you should follow to complete a more complex process, and gives you a flow for planning the execution of that process. Over the coming days I'll highlight each of these workflows and talk about the tasks each one guides you through.


  • OVM Server for SPARC Enhancements

    - by Owen Allen
    Oracle VM Server for SPARC saw a few improvements in Ops Center 12.2. In addition to brownfield support, we've made a number of enhancements that let you add OVM Servers for SPARC to a Server Pool and enable migration of their guests:

    - When you discover an OVMSS Control Domain and manage it with an Ops Center Agent, its guests are automatically discovered as well. The guest metadata is initially put in the local metadata library in the /guests directory, but you can move it from one library to another to enable migration.
    - Once you've discovered an OVMSS Control Domain, you can add it to a server pool, even if it's already configured and running logical domains. Even if live migration between OVMSS systems isn't possible due to CPU incompatibilities, you can still put them in a server pool together and enable guest recovery by configuring the CPU architecture of the guest domain as generic.
    - You can mark a guest's storage as shared to indicate that it's available to other managed OVM Server systems with the same back-end name. This lets you use storage not fully managed in an Ops Center library as part of guest migration.

    Put together, these enhancements make it much easier to manage and maintain OVMSS guests.


  • What does it mean to treat data as an asset?

    What does it mean to treat data as an asset? When considering this concept, we must define what data is and how it can be considered an asset. Data can be defined as a collection of stored truths that are open to interpretation and manipulation. Expanding on this definition, data can be viewed as a set of captured facts, measurements, and ideas used to make decisions. Furthermore, InvestorsWords.com defines an asset as any item of economic value owned by an individual or corporation. Now let's apply this definition of an asset to our definition of data and ask the following question: can facts, measurements, and ideas be items of economic value owned by an individual or corporation? The obvious answer is yes; data can be bought and sold like commodities, or analyzed to make smarter business decisions.

    We can look at the economic value of data in one of two ways. First, data can be sold as a commodity in the form of goods like eBooks, training, music, movies, and so on. Customers are willing to pay to gain access to this data for their consumption, which directly implies that data has economic value as a commodity because customers see a value in obtaining it. Secondly, data can be used to make smarter business decisions that allow companies to become more profitable and/or reduce their potential for risk in how they operate. In the past I have worked at companies where we had to analyze previous sales activities in conjunction with current activities to determine how the company was performing for the quarter. In addition, trends can be derived from existing data, allowing companies to produce forecasts so that they can make strategic business decisions based on sound forecast data.

    Companies that truly value their data are constantly trying to grow and upgrade their data and supporting applications, because it is the lifeblood of a company. If we look at an eBook retailer, for example, imagine if they lost all of their data: they would in essence be forced out of business because they would have nothing to sell. In turn, if we look at a company that was using data to facilitate better decision-making processes and they lost all of their data, they could be losing potential revenue and/or increasing the company's losses by making important business decisions virtually in the dark, compared to when those decisions were made on solid data.


  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High's presentation "The SOA Component Model" hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM's DeveloperWorks website, SCA is a set of specifications which outline a model for constructing applications and systems using a Service-Oriented Architecture (SOA); in addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object groups and Standard Development groups. The Component/Data Object group would work only on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development group would work on new and existing projects that incorporate the use of those components to accomplish various business tasks. In my opinion, the incorporation of SCA into any IT department will initially slow down the number of new features developed, due to the time needed to create the new, loosely-coupled components. However, once a company becomes more mature in its SCA process, the number of features developed will greatly increase, because the loosely-coupled components needed to add new features will already be built and ready to incorporate into any new feature request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved 11 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved 11 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model


  • Getting into smart card programming

    - by Scott Chamberlain
    I have a Compaq nw8440 with a smart card reader that is compatible with ISO 7816 compliant smart cards and has PC/SC interface support. I have been interested in smart cards and want to start playing around with them. If I wanted to get into programming smart cards, where can I find resources on how to do it, and would I need any additional hardware other than what my laptop provides (besides the cards to program)?
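    For a first experiment with a PC/SC reader like this one, simply talking to the smart card service is enough to confirm the reader is visible before buying any extra hardware. Below is a minimal C# sketch that lists the readers through the WinSCard (PC/SC) API via P/Invoke; it assumes a Windows machine with the standard Smart Card service running, and the buffer handling and error checking are kept deliberately simple.

```csharp
// Minimal sketch: list PC/SC smart card readers on Windows via winscard.dll.
// Assumes the standard Smart Card service is running; error handling is minimal.
using System;
using System.Runtime.InteropServices;

static class PcscDemo
{
    const uint SCARD_SCOPE_USER = 0;

    [DllImport("winscard.dll")]
    static extern int SCardEstablishContext(uint scope, IntPtr r1, IntPtr r2, out IntPtr context);

    [DllImport("winscard.dll", EntryPoint = "SCardListReadersW", CharSet = CharSet.Unicode)]
    static extern int SCardListReaders(IntPtr context, string groups, char[] readers, ref uint readersLen);

    [DllImport("winscard.dll")]
    static extern int SCardReleaseContext(IntPtr context);

    static void Main()
    {
        IntPtr ctx;
        if (SCardEstablishContext(SCARD_SCOPE_USER, IntPtr.Zero, IntPtr.Zero, out ctx) != 0)
        {
            Console.WriteLine("Could not reach the PC/SC resource manager.");
            return;
        }

        uint len = 0;
        SCardListReaders(ctx, null, null, ref len);        // first call: ask for the required buffer length
        var buffer = new char[len];
        SCardListReaders(ctx, null, buffer, ref len);      // second call: fill the multi-string buffer

        // The buffer holds reader names separated by nulls, ending with a double null.
        foreach (var name in new string(buffer, 0, (int)len).Split('\0'))
            if (name.Length > 0) Console.WriteLine(name);

        SCardReleaseContext(ctx);
    }
}
```

    From there, the usual next steps are SCardConnect and SCardTransmit to exchange ISO 7816 APDUs with a card, or a managed wrapper library if you would rather not write the P/Invoke layer yourself.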


  • Recording Available: March 2012 Quarterly Customer Update Webcast

    - by John Klinke
    Missed the recent Quarterly Customer Update Webcast? We covered several topics, including:

    - WebCenter 4 Pillars overview
    - Support Update
    - WebCenter Content 11gR1 Update
    - WebCenter Portal 11gR1 Update
    - Oracle Social Network Overview

    VIEW WEBCAST RECORDING: Access the March 2012 webcast recording and presentation by going to My Oracle Support Site Note: 568127.1. We'll announce the next Quarterly Customer Update Webcast here on the WebCenter Content Alerts blog.


  • LDoms and Maintenance Mode

    - by Owen Allen
    I got a few questions about how maintenance mode works with LDoms.

    "I have a Control Domain that I need to do maintenance on. What does being put in maintenance mode actually do for a Control Domain?" Maintenance mode is what you use when you're going to shut a system down, or otherwise tinker with it, and you don't want Ops Center to generate incidents and incident notifications. Maintenance mode stops new incidents from being generated, but it doesn't stop polling or monitoring of the system, and it doesn't prevent alerts.

    "What does maintenance mode do with the guests on a Control Domain?" If you have auto recovery set and the Control Domain is a member of a server pool of eligible systems, putting the Control Domain in maintenance mode automatically migrates guests to an available Control Domain. When a Control Domain is in maintenance mode, it is not eligible to receive guests, and the placement policies for guest creation and for automatic recovery won't select this server as a possible destination. If there isn't a server pool, or there aren't any eligible systems in the pool, the guests are shut down. You can select a logical domain from the Assets section to view the Dashboard for the virtual machine and the Automatic Recovery status, either Enabled or Disabled. To change the status, click the action in the Actions pane.

    "If I have to do maintenance on a system and I do not want to initiate auto-recovery, what do I have to do so that I can manually bring down the Control Domain (and all its guest domains)?" Use the Disable Automatic Recovery action.

    "If I put a Control Domain into maintenance mode, does that also put the OS into maintenance mode?" No, just the Control Domain server. You have to put the OS into maintenance mode separately.

    "Also, is there an easy way to see what assets are in maintenance mode? Can we put assets into, or take them out of, maintenance mode on some sort of group level?" You can create a user-defined group that will automatically include assets in maintenance mode. The docs here explain how to set up these groups. You'll use a group rule that looks like this:


  • How to make audio and video streaming servers work?

    - by explorex
    I am a PHP/MySQL developer, just (below) average, and I am interested in how television and radio are broadcast live over the internet. I want to know how it works and what it requires (which package in which programming language offers the best support). I must admit that I am a complete layman, but I expect to be able to do it within the next half month or year or so. And please clarify this for me: websites are stored on servers, and from my desktop I want to broadcast some video, so I need to connect to the web server (to upstream the video). Is there an application to do that, or do I have to code it or embed it in my web application, and which programming language would be suitable (does Python support that)? I also need a script to handle the upstreamed video or audio (can I do that with PHP)?


  • Why does my Canon printer print document pages at ~25% size?

    - by Erlend Alvestad
    I'm using a Canon PIXMA MP250, and I'm running 12.04 LTS. The printer's been working fine for the couple of months I've been a Linux user. That is, until today. I just printed a 1-page ODT document from LibreOffice. Instead of filling the sheet, the document occupies only a little less than 25% of the paper, in the top left corner, and the text has also shrunk to something like 5pt. I looked at the paper format settings for the document and printer, which were set to "letter". I changed these to "A4", hoping that would solve the issue. There was no change, however. I tried printing a different document in LibreOffice and got the same result. I tried exporting the original document to PDF and printing it through Document Viewer. Same result. I then printed a web page from Google Chrome. No formatting problems there. In all cases the print preview looks fine.


  • Converting dates from Visual Basic to C#

    - by sinrtb
    As an exercise in utility, I've taken it upon myself to convert one of our poor old VB .NET 1.1 apps to C# .NET 4.0. I used the Telerik code converter as a starting point and ended up with ~150 errors (not too bad considering it's over 20k lines of code and I can rarely get it to run without an error using the production source), many of which deal with time/date handling in VB versus C#. My question is this: how would you represent the following VB statement, If oStruct.AH_DATE <> #1/1/1900# Then, in C#? The converter gave me if (oStruct.AH_DATE != 1/1/1900 12:00:00 AM) {, which is of course not correct, but I cannot seem to work out how to make it correct.
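    A likely fix, assuming AH_DATE is a System.DateTime and the VB literal #1/1/1900# means midnight on January 1, 1900, is to build the DateTime explicitly, since C# has no date literals. A minimal self-contained sketch (the OStruct type here is a hypothetical stand-in for the original structure):

```csharp
using System;

class DateCompareDemo
{
    // Hypothetical stand-in for the structure referenced in the original VB code.
    struct OStruct { public DateTime AH_DATE; }

    static void Main()
    {
        var oStruct = new OStruct { AH_DATE = new DateTime(2024, 5, 1) };

        // VB:  If oStruct.AH_DATE <> #1/1/1900# Then ...
        // C#:  construct the DateTime for 1 January 1900 and compare.
        if (oStruct.AH_DATE != new DateTime(1900, 1, 1))
        {
            Console.WriteLine("AH_DATE is not 1/1/1900.");
        }
    }
}
```

    If the converted code can also produce a nullable DateTime or DBNull for that field, a null check is needed first, but the DateTime construction itself stays the same.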


  • Data Management Business Continuity Planning

    Business Continuity Governance: In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail. Data Continuity Checklist:

    - Disaster Recovery Plan/Policy
    - Backups
    - Redundancy
    - Trained Staff

    Business Continuity Policies: In order to protect data in an emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during, and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or part of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policy that IT maintain redundant hardware in case of hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy to ensure that all parties involved are able to execute the recovery plan.

    Business Continuity Procedures: Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow. Standard Business Continuity Procedures:

    - Back up data, and test the backups to ensure that they work
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy for all data and critical business functionality


  • Why do some open source libraries lack comments?

    - by entropy
    I don't know if this happens with most open source libraries, but many that I know and use (for example OpenSSL, WebKit, ...) lack comments, or contain very few. Not to mention their sparse documentation; it is hard to read their source code. We can hardly understand what a member variable means, or what a function does. This seems to go against standard coding practice. Why is that? And how do people manage to collaborate on these open source projects with so few comments?


  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service-oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophy to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a "Code and Go" style of software development: they developed only for current problems and did not really look at how the company could leverage some of that code across the entire enterprise system. The concept of Service Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on SOA-related projects, because many of the standards and policies implemented by enterprise IT governing boards were written for the "Code and Go" paradigm and do not take into account the idiosyncrasies of SOA/integration-based development. As IT governance moves forward, its focus should aim more for a "Develop to Integrate" philosophy than a "Code and Go" one. Examples of the "Develop to Integrate" philosophy:

    - Defining preferred data transfer methodologies (XML vs. JSON), and when to use them
    - Updating security best practices for exposing public services, based on existing standard security policies
    - Defining when to create a new SOA project vs. implementing localized components that could be reused elsewhere in the enterprise


  • Enterprise Service Bus (ESB): Important architectural piece to a SOA or is it just vendor hype?

    Is an Enterprise Service Bus (ESB) an important architectural piece of a Service-Oriented Architecture (SOA), or is it just vendor hype intended to sell a particular product such as SOA-in-a-box? According to IBM.com, an ESB is a flexible connectivity infrastructure for integrating applications and services; it offers a flexible and manageable approach to service-oriented architecture implementation. With this being said, it is my personal belief that ESBs are an important architectural piece of any SOA. Additionally, generic design patterns have been created around the integration of web services into an ESB regardless of vendor. ESB design patterns, according to Philip Hartman, can be classified into the following categories:

    - Interaction Patterns: enable service interaction points to send and/or receive messages from the bus
    - Mediation Patterns: enable the altering of message exchanges
    - Deployment Patterns: support solution deployment into a federated infrastructure

    Examples of Interaction Patterns: One-Way Message; Synchronous Interaction; Asynchronous Interaction; Asynchronous Interaction with Timeout; Asynchronous Interaction with a Notification Timer; One Request, Multiple Responses; One Request, One of Two Possible Responses; One Request, a Mandatory Response, and an Optional Response; Partial Processing; Multiple Application Interactions.

    Benefits of the Mediation Pattern: the mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently; it lets you design an intermediary to decouple many peers; and it promotes the many-to-many relationships between interacting peers to "full object status".

    Examples of Deployment Patterns: Global ESB (services share a single namespace and all service providers are visible to every service requester across an entire network); Directly Connected ESB (a global service registry that enables independent ESB installations to be visible); Brokered ESB (bridges services that are reluctant to expose requesters or providers to ESBs in other domains); Federated ESB (service consumers and providers connect to the master or to a dependent ESB to access services throughout the network).

    References:
    - Mediator Design Pattern. (2011). Retrieved 2011, from SourceMaking.com: http://sourcemaking.com/design_patterns/mediator
    - Hartman, P. (2006). ESB Patterns that "Click". Retrieved 2011, from The Art and Science of Being an IT Architect: http://artsciita.blogspot.com/2006/01/esb-patterns-that-click.html
    - IBM. (2011). WebSphere DataPower XC10 Appliance Version 2.0. Retrieved 2011, from IBM.com: http://publib.boulder.ibm.com/infocenter/wdpxc/v2r0/index.jsp?topic=%2Fcom.ibm.websphere.help.glossary.doc%2Ftopics%2Fglossary.html
    - Oracle. (2005). 12 Interaction Patterns. Retrieved 2011, from Oracle® BPEL Process Manager Developer's Guide: http://docs.oracle.com/cd/B31017_01/integrate.1013/b28981/interact.htm#BABHHEHD
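    To make the mediation idea above a bit more concrete, here is a minimal C# sketch of the classic Mediator pattern cited in the references: peers never call each other directly, they only hand messages to the mediator, which routes them. The class names are hypothetical, and this is an illustration of the pattern rather than any particular ESB product.

```csharp
// Minimal sketch of the Mediator idea referenced above: endpoints only talk to
// the mediator ("bus"), which routes messages, so peers stay loosely coupled.
// Class names are hypothetical and not tied to any ESB product.
using System;
using System.Collections.Generic;

interface IMediator { void Send(string message, Endpoint sender); }

class Endpoint
{
    private readonly IMediator _bus;
    public string Name { get; private set; }
    public Endpoint(string name, IMediator bus) { Name = name; _bus = bus; }

    public void Send(string message) { _bus.Send(message, this); }
    public void Receive(string message) { Console.WriteLine(Name + " received: " + message); }
}

class SimpleBus : IMediator
{
    private readonly List<Endpoint> _endpoints = new List<Endpoint>();
    public void Register(Endpoint e) { _endpoints.Add(e); }

    // Route a message to every peer except the sender (a simple one-way interaction).
    public void Send(string message, Endpoint sender)
    {
        foreach (var e in _endpoints)
            if (e != sender) e.Receive(message);
    }
}

class Program
{
    static void Main()
    {
        var bus = new SimpleBus();
        var orders = new Endpoint("Orders", bus);
        var shipping = new Endpoint("Shipping", bus);
        bus.Register(orders);
        bus.Register(shipping);

        orders.Send("Order 1001 ready for fulfillment");   // Shipping receives it via the mediator
    }
}
```

    A real ESB layers content-based routing, protocol bridging, and message transformation on top of this same basic decoupling.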


  • What is an Enterprise Resource Planning (ERP) System?

    In order to understand what an Enterprise Resource Planning (ERP) system is, let us look at a classic American kids' snack, the Rice Krispie Treat, and conceptually view the treat as the whole of a company's internal applications. We can then view the company's departmentalized software applications as the individual Rice Krispies in the treat; each Rice Krispie, in turn, consists of a combination of ingredients that can be broken down into data, user interfaces, and business logic. Next, we have the margarine or butter used to help the marshmallows bind with the Rice Krispies; in our conceptual view this role is taken by a data source, typically a relational database management system. Finally, we have the melted marshmallows, which act as the ERP software that connects all of the individual departmental applications into one unified system, allowing users to interact with all of the dispersed systems through a single interface. An example of this would be a customer placing an order with a telephone operator; once the order is processed, an employee in the shipping department sees the order ready for fulfillment on their order screen. The ERP acts as a go-between for the various independent departmental systems so that they can integrate with one another.


  • Hiring New IT Employees versus Promoting Internally for IT Positions

    Recently I was asked my opinion on hiring new IT employees versus promoting internally for IT positions. After thinking a little more about this staffing question, specifically promoting internally versus hiring new employees, I think my answer is that it truly depends on the situation; however, in most cases I would side with promoting internally. The key factors in this decision should be a company's or department's current values, culture, attitude, and existing priorities. For example, if a company values retaining all of its hard-earned business knowledge, it will tend to promote existing employees internally over hiring new employees. The trade-off is that the company has to pay to train an existing employee on a new technology, and the learning curve for some technologies can be very steep. Conversely, if a company values new technologies and technical proficiency over business knowledge, it will tend to hire new employees because they may already have experience with a technology the company is planning to use. In this scenario, the company takes on the additional overhead of letting a new employee learn how the business operates before they are fully effective.

    To illustrate the points above, consider a contractor that builds in-ground pools. He has the option to hire employees that are very strong but use small shovels to dig, or employees weak in physical strength but who use large shovels to dig. Which employee should the contractor use to dig a hole for a new in-ground pool? If we compare the candidates for this job, we find they map closely to promoting someone internally versus hiring from outside. The first represents the existing workers: very strong in their understanding of how the business operates and why things are done in a specific manner, but potentially weaker than an outsider when it comes to specific technologies, and in need of some time to build their technical prowess for a new position, much like the strong worker upgrading to a larger shovel in order to remove more dirt at once when digging. The other is very much like a new hire who may already have the large shovel but needs to build up their strength in order to use it properly and efficiently, so they can move the maximum amount of dirt in a minimal amount of time. This can be compared to a new employee learning how a business operates before they can be fully functional and integrated into the company or department.

    Another key factor pertains to existing employees: their passion for their work, their ability to accept new responsibility when it is given, and their willingness to take on responsibilities when they see a need in the business. As much as possible should be considered in this decision, down to the mood of the team, the quality of existing staff, the learning curve for both technology and business, and the potential side effects on the existing staff, along with many other considerations based on the current team's, department's, and company's culture and mood. There are several factors to weigh when promoting an individual or hiring new blood for a team; both can provide great benefits as well as create controversy within a group.

    Personally, I think staffing, especially in the IT world, is like building a large-scale system: all of the components and modules must fit together and perform as one cohesive system, in the same way a team must come together, using their individually acquired skills, to work as one team. If a module is out of place or missing, the rest of the team suffers until all of its issues are addressed and resolved.

    Benefits of Promoting Internally:
    - Internal promotions give employees a reason to constantly upgrade their technology, business, and communication skills if they want to further their careers
    - Employees can control their own destiny based on personal desires
    - The employee already knows how the business operates
    - Companies can save money by promoting internally, because the initial overhead of letting new hires learn how the company operates is very expensive
    - Newly promoted employees can help train their replacements while transitioning to their new role within the company
    - Existing employees already have a proven track record of fitting in with the business culture; this is always an unknown with new hires

    Benefits of a New Hire:
    - New employees can energize and excite existing employees
    - New employees can bring new ideas and advancements in technology
    - New employees can offer a different perspective on existing issues based on their past experience

    As you can see, the decision to promote an existing employee versus hiring a new person should be based on several factors that ultimately place the business in the best possible situation for the immediate and long-term future. How would you handle this situation? Would you hire a new employee or promote from within?

