Search Results

Search found 5245 results on 210 pages for 'william hand'.


  • How to Control Screen Layouts in LightSwitch

    - by ChrisD
    Visual Studio LightSwitch has a bunch of screen templates that you can use to quickly generate screens. They give you good starting points that you can customize further. When you add a new screen to your project you see a set of screen templates that you can choose from. These templates lay out all the related data you choose to put on a screen automatically for you. And don’t under estimate them; they do a great job of laying out controls in a smart way. For instance, a tab control will be used when you select more than one related set of data to display on a screen. However, you’re not limited to taking the layout as is. In fact, the screen designer is pretty flexible and allows you to create stacks of controls in a variety of configurations. You just need to visualize your screen as a series of containers that you can lay out in rows and columns. You then place controls or stacks of controls into these areas to align the screen exactly how you want. If you’re new in Visual Studio LightSwitch, you can see this tutorial. OK, Let’s start with a simple example. I have already designed my data entities for a simple order tracking system similar to the Northwind database. I also have added a Search Data  Screen to search my Products already. Now I will add a new Details Screen for my Products and make it the default screen via the “Add New Screen” dialog: The screen designer picks a simple layout for me based on the single entity I chose, in this case Product. Hit F5 to run the application, select a Product on the search screen to open the Product Details Screen. Notice that it’s pretty simple because my entity is simple. Click the “Customize” button in the top right of the screen so we can start tweaking it. The left side of the screen shows the containership of controls and data bindings (called the content tree) and the right side shows the live preview with data. Notice that we have a simple layout of two rows but only one row is populated (with a vertical stack of controls in this case). The bottom row is empty. You can envision the screen like this: Each container will display a group of data that you select. For instance in the above screen, the top row is set to a vertical stack control and the group of data to display is coming from Product. So when laying out screens you need to think in terms of containers of controls bound to groups of data. To change the data to which a container is bound, select the data item next to the container: You can select the “New Group” item in order to create more containers (or controls) within the current container. For instance to totally control the layout, select the Product in the top row and hit the delete key. This will delete the vertical stack and therefore all the controls on the screen. The content tree will still have two rows, but the rows are now both empty. If you want a layout of four containers (two rows and two columns) then select “New Group” for the data item and then change the vertical stack control to “Two Columns” for both of the rows as shown here: You can keep going on and on by selecting new groups and choosing between rows or columns. 
Here’s a layout with 8 containers, 4 rows and 2 columns: And here is a layout with 7 content areas; one row across the top of the screen and three rows with two columns below that: When you select Choose Content and select a data item like Product it will populate all the controls within the container (row or column in a vertical stack) however you have complete control on what to display within each group. You can delete fields you don’t want to display and/or change their controls. You can also change the size of controls and how they display by changing the settings in the properties window. If you are in the Screen Designer (and not the customization mode like we are here) you can also drag-drop data items from the left-hand side of the screen to the content tree. Note, however, that not all areas of the tree will allow you to drop a data item if there is a binding already set to a different set of data. For instance you can’t drop a Customer ID into the same group as a Product if they originate from different entities. To get around this, all you need to do is create a new group and content area as shown above. Let’s take a more complex example that deals with more than just product. I want to design a complex screen that displays Products and their Category, as well as all the OrderDetails for which that product is selected. This time I will create a new screen and select List and Details, select the Products screen data, and include the related OrderDetails. However I’m going to totally change the layout so that a Product grid is at the top left and below that is the selected Product detail. Below that will be the Category text fields and image in two columns below. On the right side I want the OrderDetails grid to take up the whole right side of the screen. All this can be done in customization mode while you’re debugging the application. To do this, I first deleted all the content items in the tree and then re-created the content tree as shown in the image below. I also set the image to be larger and the description textbox to be 5 rows using the property window below the live preview. I added the green lines to indicate the containers and show how it maps to the content tree (click to enlarge): I hope this demystifies the screen designer a little bit. Remember that screen templates are excellent starting points – you can take them as-is or customize them further. It takes a little fooling around with customizing screens to get them to do exactly what you want but there are a ton of possibilities once you get the hang of it. Stay tuned for more information on how to create your own screen templates that show up in the “Add New Screen” dialog. Enjoy! The tutorial that might be interested: Adding Custom Control In LightSwitch

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000; or a business process that handles article reviews in which a group of reviewers need to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval Scenarios: manage documents and other transactional data through approval chains . For example: approve expense report, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case Management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time. SOA 11g Human Workflow includes the following features: Assignment and routing of tasks to the correct users or groups. Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task. Presentation of tasks to end users through a variety of mechanisms, including a Worklist application. Organization, filtering, prioritization and other features required for end users to productively perform their tasks. Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks. Human Workflow Architecture The Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task. Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated?, etc Stages and Participants When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel. You can combine them to create any usage you require. See diagram below: Assignment and Routing There are different ways a task can be assigned and routed: Single Approver: task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified with the decision. If the task is assigned to a group then once one of managers acts on it, the task is completed. Parallel : task is assigned to a set of people that must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote. 
Serial : participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information) : task is assigned to participants who can view it, add comments and attachments, but can not modify or complete the task. Task Actions The following is the list of actions that can be performed on a task: Claim : if a task is assigned to a group or multiple users, then the task must be claimed first to be able to act on it. Escalate : if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy). Pushback : the task is sent back to the previous assignee. Reassign :if the participant is a manager, he/she can delegate a task to his/her reports. Release : if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete the task. Any of the other assignees can claim and complete the task. Request Information and Submit Information : use when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees. Suspend and Resume :if a task is not relevant, it can be suspended. A suspension is indefinite. It does not expire until Resume is used to resume working on the task. Withdraw : if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next. Renew : if a task is about to expire, the participant can renew it. The task expiration date is extended one week. Notifications Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following: Assigned/renewed/delegated/reassigned/escalated Completed Error Expired Request Info Resume Suspended Added/Updated comments and/or attachments Updated Outcome Withdraw Other Actions (e.g. acquiring a task) Here is an example of an email notification: Worklist Application Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can: Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks. Filter tasks view based on various criteria. Work with standard work queues, such as high priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high priority tasks, tasks due in 24 hours, expense approval tasks and more. Define custom work queues. Gain proxy access to part of another user's tasks. Define custom vacation rules and delegation rules. Enable group owners to define task dispatching rules for shared tasks. Collect a complete workflow history and audit trail. Use digital signatures for tasks. Run reports like Unattended tasks, Tasks productivity, etc. Here is a screenshoot of what the Worklist Application looks like. On the right hand side you can see the tasks that have been assigned to the user and the task's detail. 
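    To make the parallel voting rule above concrete (a task is approved once 50% of the participants approve it, or only on a unanimous vote), here is a small Java sketch of the outcome decision. It is not the SOA Suite Human Workflow API; the class, method and enum names are invented purely to illustrate the threshold logic the engine applies for a parallel participant.

        public class ParallelVoteSketch {

            enum Outcome { APPROVED, REJECTED, PENDING }

            // requiredPercentage = 50 models "approved once 50% of participants approve";
            // requiredPercentage = 100 models a unanimous vote.
            static Outcome decide(int approvals, int rejections, int totalParticipants,
                                  int requiredPercentage) {
                double needed = totalParticipants * requiredPercentage / 100.0;
                if (approvals >= needed) {
                    return Outcome.APPROVED;
                }
                int outstanding = totalParticipants - approvals - rejections;
                if (approvals + outstanding < needed) {
                    return Outcome.REJECTED;   // the threshold can no longer be reached
                }
                return Outcome.PENDING;        // keep waiting for the remaining participants
            }

            public static void main(String[] args) {
                // 2 of 4 participants approved, 50% required -> APPROVED
                System.out.println(decide(2, 0, 4, 50));
                // unanimous vote, one participant still outstanding -> PENDING
                System.out.println(decide(3, 0, 4, 100));
            }
        }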
References Introduction to SOA Suite 11g Human Workflow Webcast Note 1452937.2 Human Workflow Information Center Using the Human Workflow Service Component 11.1.1.6 Human Workflow Samples Human Workflow APIs Java Docs

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle’s Engineer Chris Kasso, and Technology Evangelist Arun Gupta, presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth with each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby – Adam is someone I have been fortunate to know for many years.GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information and offered tips that are intended for both seasoned and new users, that are meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish. Attendees were encouraged to pursue tips that contained new information for them. All 50 tips can be accessed here.Below are several examples of more elaborate tips and a final practical tip on how to get in touch with these folks. Tip #1: Using the login Command * To execute a remote command with asadmin you must provide the admin's user name and password.* The login command allows you to store the login credentials to be reused in subsequent commands.* Can be logged into multiple servers (distinguish by host and port). Example:     % asadmin --host ouch login     Enter admin user name [default: admin]>     Enter admin password>     Login information relevant to admin user name [admin]     for host [ouch] and admin port [4848] stored at     [/Users/ckasso/.asadminpass] successfully.     Make sure that this file remains protected.     Information stored in this file will be used by     asadmin commands to manage the associated domain.     Command login executed successfully.     % asadmin --host ouch list-clusters     c1 not running     Command list-clusters executed successfully.Tip #4: Using the AS_DEBUG Env Variable* Environment variable to control client side debug output* Exposes: command processing info URL used to access the command:                           http://localhost:4848/__asadmin/uptime Raw response from the server Example:   % export AS_DEBUG=true  % asadmin uptime  CLASSPATH= ./../glassfish/modules/admin-cli.jar  Commands: [uptime]  asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm      ------- RAW RESPONSE  ---------   Signature-Version: 1.0   message: Up 7 mins 10 secs   milliseconds_value: 430194   keys: milliseconds   milliseconds_name: milliseconds   use-main-children-attribute: false   exit-code: SUCCESS  ------- RAW RESPONSE  ---------Tip #11: Using Password Aliases * Some resources require a password to access (e.g. 
DB, JMS, etc.).* The resource connector is defined in the domain.xml. Example: Suppose the DB resource you wish to access requires an entry like this in the domain.xml:     <property name="password" value="secretp@ssword"/> But company policies do not allow you to store the password in the clear.* Use password aliases to avoid storing the password in the domain.xml.* Create a password alias:     % asadmin create-password-alias DB_pw_alias     Enter the alias password>     Enter the alias password again>     Command create-password-alias executed successfully.* The password is stored in the domain's encrypted keystore.* Now update the password value in the domain.xml:     <property name="password" value="${ALIAS=DB_pw_alias}"/> Tip #21: How to Start GlassFish as a Service * Configuring a server to automatically start at boot can be tedious.* Each platform does it differently.* The create-service command makes this easy.   Windows: creates a Windows service Linux: /etc/init.d script Solaris: Service Management Facility (SMF) service * Must execute create-service with admin privileges.* Can be used for the DAS or instances.* Try it first with the --dry-run option.* There is an (unsupported) _delete-server. Example:     # asadmin create-service domain1     The Service was created successfully. Here are the details:     Name of the service:application/GlassFish/domain1     Type of the service:Domain     Configuration location of the service:/work/gf-3.1.2.2/glassfish3/glassfish/domains     Manifest file location on the system:/var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.     You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:     * /usr/bin/svcs -a | grep domain1  // status     * /usr/sbin/svcadm enable domain1 // start     * /usr/sbin/svcadm disable domain1 // stop     * /usr/sbin/svccfg delete domain1 // uninstall Tip #34: Posting a Command via REST * Use wget/curl to execute commands on the DAS. Example: Deploying an application   % curl -s -S \       -H 'Accept: application/json' -X POST \       -H 'X-Requested-By: anyvalue' \       -F id=@/path/to/application.war \       -F force=true http://localhost:4848/management/domain/applications/application * Use @ before a file name to tell curl to send the file's contents.* The force option tells GlassFish to force the deployment in case the application is already deployed. Tip #46: Upgrading to a Newer Version * Upgrade applications and configuration from an earlier version.* Upgrade Tool: Side-by-side upgrade – GUI: asupgrade – CLI: asupgrade --c – What happens? * Copies older source domain -> target domain directory * asadmin start-domain --upgrade * Update Tool and pkg: In-place upgrade – GUI: updatetool, install all Available Updates – CLI: pkg image-update – Upgrade the domain * asadmin start-domain --upgrade Tip #50: How to reach us? * GlassFish Forum: http://www.java.net/forums/glassfish/glassfish * [email protected] * @glassfish * facebook.com/glassfish *
youtube.com/GlassFishVideos * blogs.oracle.com/theaquarium Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta’s new book Java EE 6 Pocket Guide.
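    As a companion to Tip #4 and Tip #34, here is a minimal Java sketch of hitting the DAS admin interface over HTTP instead of curl. It assumes a local, unsecured development domain on port 4848 and an uptime resource under the /management/domain/ REST path used in Tip #34; a secured domain would additionally need authentication, and the exact resource path may differ between GlassFish versions.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class UptimeCheck {
            public static void main(String[] args) throws Exception {
                // Assumed resource path, modelled on the Tip #34 URL scheme.
                URL url = new URL("http://localhost:4848/management/domain/uptime");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                conn.setRequestProperty("Accept", "application/json");  // ask for JSON, not HTML
                conn.setRequestProperty("X-Requested-By", "anyvalue");  // same header curl sends in Tip #34
                System.out.println("HTTP " + conn.getResponseCode());
                BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
                in.close();
            }
        }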

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's control plus the comma key). This pops up the Navigate To dialog that looks like this: As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40 ), and we wanted to provide this functionality too. This post provides some notes on the things I discovered mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects inside the solution. So first of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx . It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces Namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx . I've also written a small application to help with switching between development and production MEF assemblies, which you can find on Codeproject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx. The Navigate To interfaces Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces: INavigateToItemProviderFactoryThis is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]  INavigateToItemProvider Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposeable(). INavigateToItemDisplayFactory Your INavigateToItemProvider hands back NavigateToItems to the NavigateTo framework. 
But to give you good control over what appears in the NavigateTo dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay  INavigateToItemDisplay Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked. Carrying out the search The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE – so you need to be aware of that if you want to interact with editors or other parts of the IDE UI. When the user invokes the Navigate To dialog, your INavigateToItemProvider gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. The sequence diagram below shows what happens next. Your INavigateToItemProvider will get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request – you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, NavigateTo will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search, and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message – that's your cue to abandon any uncompleted searches, and dispose any resources you might be using as part of your search. While you conduct your search invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement to how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols. Displaying the Results The Navigate to framework invokes INavigateToItemDisplayProvider.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement an INavigateTo() method to display a symbol in an editor IDE when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties. 
Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious: Additional information: The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". If you look at the first line in the Navigate To screenshot at the top of this article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information. DescriptionItems: You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the Categories on the left and the Descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications). Summary: I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples of how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
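    The trickiest behaviour described above is that StartSearch() is asynchronous and that each new search term must abandon the previous scan without waiting for a StopSearch(). The sketch below shows that cancel-and-restart pattern in Java purely for illustration (the real provider is a .NET/MEF class); SearchCoordinator, ResultSink and the symbol index are invented stand-ins, and the every-16-symbols progress report mirrors the approach described in the article.

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicBoolean;

        public class SearchCoordinator {
            interface ResultSink {
                void addItem(String symbol);
                void reportProgress(int done, int total);
            }

            private final ExecutorService worker = Executors.newSingleThreadExecutor();
            private volatile AtomicBoolean currentCancel = new AtomicBoolean(false);

            // Called for every keystroke; the moral equivalent of StartSearch(term, callback).
            public void startSearch(List<String> symbolIndex, String term, ResultSink sink) {
                currentCancel.set(true);                 // abandon the previous search
                AtomicBoolean cancel = new AtomicBoolean(false);
                currentCancel = cancel;
                worker.submit(() -> {
                    int total = symbolIndex.size();
                    for (int i = 0; i < total; i++) {
                        if (cancel.get()) return;        // a newer search has superseded us
                        String symbol = symbolIndex.get(i);
                        if (symbol.toLowerCase().contains(term.toLowerCase())) {
                            sink.addItem(symbol);
                        }
                        if (i % 16 == 0) sink.reportProgress(i, total);  // occasional progress
                    }
                    sink.reportProgress(total, total);
                });
            }

            // The moral equivalent of Dispose(): stop everything when the dialog closes.
            public void dispose() {
                currentCancel.set(true);
                worker.shutdownNow();
            }
        }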

    Read the article

  • At times, you need to hire a professional.

    - by Phil Factor
    After months of increasingly demanding toil, the development team I belonged to was told that the project was to be canned and the whole team would be fired. I’d been brought into the team as an expert in the data implications of a business re-engineering of a major financial institution. Nowadays, you’d call me a data architect, I suppose. I’d spent a happy year being paid consultancy fees solving a succession of interesting problems until the point when the company lost its nerve and closed the entire initiative. The IT industry was in one of its characteristic mood-swings downwards. After the announcement, we met in the canteen. A few developers had scented the smell of death around the project already and had been applying unsuccessfully for jobs. There was a sense of doom in the mass of dishevelled and bleary-eyed developers. After giving vent to anger and despair, talk turned to getting new employment. It was then that I perked up. I’m not an obvious choice to give advice on getting, or passing, IT interviews. I reckon I’ve failed most of the job interviews I’ve ever attended. I once even failed an interview for a job I’d already been doing perfectly well for a year. The jobs I’ve got have mostly been from personal recommendation. Paradoxically though, from years as a manager trying to recruit good staff, I know a lot about what IT managers are looking for. I gave an impassioned speech outlining the important factors in getting to an interview. The most important thing, certainly in my time at work, is the quality of the résumé or CV. I can’t even guess the huge number of CVs (résumés) I’ve read through, scanning for candidates worth interviewing. Many IT developers find it impossible to describe their career succinctly on two sides of paper. They leave chunks of their life out (were they in prison?), get immersed in detail, put in irrelevancies, describe what was going on at work rather than what they themselves did, exaggerate their importance, criticize their previous employers, aren’t aware of the important aspects of a role to a potential employer, suffer from shyness and modesty, and lack any sort of organized perspective of their work. There are many ways of failing to write a decent CV. Many developers suffer from the delusion that their worth can be recognized purely from the code that they write, and shy away from anything that seems like self-aggrandizement. No. A resume must make a good impression, which means presenting the facts about yourself in a clear and positive way. You can’t do it yourself. Why not have your resume professionally written? A good professional CV writer will know the qualities being looked for in a CV and interrogate you to winkle them out. Their job is to make order and sense out of a confused career, to summarize in one page a mass of detail that presents to any recruiter the information that’s wanted; to stand back and describe an accurate summary of your skills and work experiences dispassionately, without rancor, pity or modesty. You are no more capable of producing objective documentation of your career than you are of taking your own appendix out. My next recommendation was more controversial. This is to have a professional image overhaul, or makeover, followed by a professionally taken photo portrait. I discovered this by accident. It is normal for IT professionals to face impossible deadlines and long working hours by looking more and more like something that had recently blocked a sink.
Whilst working in IT, and in a state of personal dishevelment, I’d been offered the role in a high-powered amateur production of an old ex- Broadway show, purely for my singing voice. I was supposed to be the presentable star. When the production team saw me, the air was thick with tension and despair. I was dragged kicking and protesting through a succession of desperate grooming, scrubbing, dressing, dieting. I emerged feeling like “That jewelled mass of millinery, That oiled and curled Assyrian bull, Smelling of musk and of insolence.” (Tennyson Maud; A Monodrama (1855) Section v1 stanza 6) I was then photographed by a professional stage photographer.  When the photographs were delivered, I was amazed. It wasn’t me, but it looked somehow respectable, confident, trustworthy.   A while later, when the show had ended, I took the photos, and used them for work. They went with the CV to job applications. It did the trick better than I could ever imagine.  My views went down big with the developers. Old rivalries were put immediately to one side. We voted, with a show of hands, to devote our energies for the entire notice period to getting employable. We had a team sourcing the CV Writer,  a team organising the make-overs and photographer, and a third team arranging  mock interviews. A fourth team determined the best websites and agencies for recruitment, with the help of friends in the trade.  Because there were around thirty developers, we were in a good negotiating position.  Of the three CV Writers we found who lived locally, one proved exceptional. She was an ex-journalist with an eye to detail, and years of experience in manipulating language. We tried her skills out on a developer who seemed a hopeless case, and he was called to interview within a week.  I was surprised, too, how many companies were experts at image makeovers. Within the month, we all looked like those weird slick  people in the ‘Office-tagged’ stock photographs who stare keenly and interestedly at PowerPoint slides in sleek chromium-plated high-rise offices. The portraits we used still adorn the entries of many of my ex-colleagues in LinkedIn. After a months’ worth of mock interviews, and technical Q&A, our stutters, hesitations, evasions and periphrastic circumlocutions were all gone.  There is little more to relate. With the résumés or CVs, mugshots, and schooling in how to pass interviews, we’d all got new and better-paid jobs well  before our month’s notice was ended. Whilst normally, an IT team under the axe is a sad and depressed place to belong to, this wonderful group of people had proved the power of organized group action in turning the experience to advantage. It left us feeling slightly guilty that we were somehow cheating, but I guess we were merely leveling the playing-field.

    Read the article

  • Combining Shared Secret and Certificates

    - by Michael Stephenson
    As discussed in the introduction article this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also combine this with certificates so that you can identify the sender of the message.   Prerequisites As in the previous article before going into the walk through I want to explain a few assumptions about the scenario we are implementing but to keep the article shorter I am not going to walk through all of the steps in how to setup some of this. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article however if you want to understand more about how to configure WCF and IIS for such a scenario please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz   Setting up the Certificates To keep the post and sample simple I am going to use the local computer store for all certificates but this bit is really just the same as setting up certificates for an example where you are using WCF without using Windows Azure Service Bus. In the sample I have included two batch files which you can use to create the sample certificates or remove them. Basically you will end up with: A certificate called PocServerCert in the personal store for the local computer which will be used by the WCF Service component A certificate called PocClientCert in the personal store for the local computer which will be used by the client application A root certificate in the Root store called PocRootCA with its associated revocation list which is the root from which the client and server certificates were created   For the sample Im just using development certificates like you would normally, and you can see exactly how these are configured and placed in the stores from the batch files in the solution using makecert and certmgr.   The Service Component To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but to also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.Cert.Services. It has a single service which is the Visual Studio template for a WCF service when you add a new WCF Service Application so we have a service called Service1 with its Echo method. Nothing special so far!.... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if your using netTcpRelayBinding). The key points to note on the above picture are the service behavior called MyServiceBehaviour and the service bus endpoints behavior called MyEndpointBehaviour. 
We will go into these in more detail later. The Relay Binding: The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus listener, and the message credential is where we will use our certificate, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS. The Endpoint Behaviour: In the below picture you can see the endpoint behavior, which is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. Hopefully, if you are familiar with the Windows Azure Service Bus relay feature, the above is very familiar to you, and this is a very common setup for this section. There is nothing specific to the username token implementation here. The Service Behaviour: Now we come to the bit with most of the certificate stuff in it. When you configure the service behavior, I have included the serviceCredentials element, set it up to use the clientCertificate check, and also specified the serviceCertificate with information on how to find the server's certificate in the store. I have also added a serviceAuthorization section where I will implement my own authorization component to perform additional security checks after the service has validated that the message was signed with a good certificate. I also have the same serviceSecurityAudit configuration to log access to my service. My Authorization Manager: The below picture shows the implementation of my authorization manager. WCF will eventually hand off the message to my authorization component before it calls the service code. This is where I can perform some logic to check if the identity is allowed to access resources. In this case I am simply rejecting messages from anyone except the PocClientCertificate. The Client: Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a certificate over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below that I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding. You can see below I have configured security to use TransportWithMessageCredential, so we will flow the token from a certificate with the message, and also the RelayAccessToken relayClientAuthenticationType, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behavior like in the below picture. This contains the normal transportClientEndpointBehaviour to set up the ACS shared secret configuration, but we have also configured the clientCertificate to look for the PocClientCert.
Finally, below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a call to the WCF service in exactly the normal way, but the configuration will jump in and ensure that a token representing the client certificate is passed. Conclusion: As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a certificate-based token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented. Sample: The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.Cert.zip

    Read the article

  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client I'd say that's not exactly the truth - more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract is to go from March of this year, expiring on December 31st of this year - 10 months. Over which a payment is to be paid on the 30th of each month for a set amount. I've been fulfilling my end of the contract on all points - doing server maintenence, application and database changes, doing huge rush changes and pretty much just going above and beyond. Currently I'm in the middle of development of an iPhone mobile application (PhoneGap based) which is nearing completion (probably 3-4 weeks from submission). It has not been all peaches and flowers though. Each and every month when my paycheck comes due, there always seems to be an issue of sorts. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once (on time being about 2-3 days within the 30th). I've received my paycheck as late as the 15th of the next month - over two weeks late. I've put up with it because I need the paycheck. There have been promises and promises of "we'll send it out on time next time! I promise" - only to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?" - unfortunately I don't have the luxury of not worrying about money. The last straw was with my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it man! we sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th - the date it was run through the postage machine. I've been outright lied to, at this point. I carry on working, because again - I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told I probably went over my hourly allotment (I'm paid for 40 hours a month, I probably put in at least 50). On Saturday, the 1st, I gave the main contact at the company (a company of 3, by the way - this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us..." - not verbatim but you get the idea (I hope?). I got out of this conversation that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early - there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last months work and call it a day. Unfortunately I still haven't gotten last months check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him". 
So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this, realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer administrator here. Godaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google Adsense / Analytics? Invalid password. Dedicated server account manager? Invalid password. Now, I have the servers root password - I just built the box last week and haven't had a chance to send the guy the root password. Wasn't in a rush, I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed; the writing is on the wall - I am out. Here's the conundrum. I have the root password, they do not. If I give them this password all the leverage I have is gone, out the door and out of my hands. During this argument of why am I not getting paid the guy sends me an email saying "oh by the way, what's the root username and password to the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server, I just don't know where to go. I can hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet. I can "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well he's not holding their stuff hostage" point of view, but what stops them from hiring some one else to just fix the issue at hand? For all I know the guys nephew is a "l33t hax0r" and can figure it out for free. I can give in, document as much as I can and take him to small claims court. This is breach of contract, I'm not getting paid. I have a case, right? ???? Does anyone have any experience in this? What can I do? What are my options? I'm broke, I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to work - we have a 1 year old) and is already looking at getting a part time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.

    Read the article

  • Sending notification after an event has remained open for a specified period

    - by Loc Nhan
    Enterprise Manager (EM) 12c allows you to create an incident rule to send a notification and/or create an incident after an event has been open for a specified period. Such an incident rule will help prevent premature alerts on issues that may correct themselves within a certain amount of time. For example, there are some agents in an unstable network area, and often there are communication failures between the agents and the OMS lasting three or four minutes at a time. In this scenario, you may only want to receive alerts after an agent in that area has been in the Agent Unreachable status for at least five minutes. Note: Many non-target-availability metrics allow users to specify the "number of occurrences", or the number of consecutive times metric values reach thresholds, before a notification is sent. It is best to use the feature for such metrics. This article provides a step-by-step guide for creating an incident rule set to cater for the above scenario, that is, to create an incident and send a notification after the Agent Unreachable event has remained open for a five-minute duration. Steps to create the incident rule: 1. Log on to the console and navigate to Setup -> Incidents -> Incident Rules. Note: A non-super user requires the Create Enterprise Rule Set privilege, which is a resource privilege, to create an incident rule. The Incident Rules - All Enterprise Rules page displays. 2. Click Create Rule Set… The Create Rule Set page displays. 3. Enter a name for the rule set (e.g. Rule set for agents in flaky network areas), optionally enter a description, leave everything else at default values, and click + Add. The Search and Select: Targets page pops up. Note: While you can create a rule set for individual targets, it is a best practice to use a group for this purpose. 4. Select an appropriate group, e.g. the AgentsInFlakyNetwork group. The Select button becomes enabled; click it. The Create Rule Set page displays. 5. Leave everything at default values, and click the Rules tab. The Create Rule Set page displays. 6. Click Create… The Select Type of Rule to Create page pops up. 7. Leave the Incoming events and updates to events option selected, and click Continue. The Create New Rule: Select Events page displays. 8. Select Target Availability from the Type drop-down list. The page shows more options for Target Availability. 9. Select the Specific events of type Target Availability option, and click + Add. The Select Target Availability events page pops up. 10. Select Agent from the Target Type drop-down list. The page expands. 11. Click the Agent unreachable checkbox, and click OK. Note: If you also want to receive a notification when the event is cleared, click the Agent unreachable end checkbox as well before clicking OK. The Create New Rule: Select Events page displays. 12. Click Next. The Create New Rule: Add Actions page displays. 13. Click + Add. The Add Actions page displays. 14. Do the following: a. Select the Only execute the actions if specified conditions match option (you don't want the action to always trigger). The following options appear in the Conditions for Actions section. b. Select the Event has been open for specified duration option. The Conditions for actions section expands. c. Change the values of Event has been open for to 5 Minutes as shown below. d. In the Create Incident or Update Incident section, click the Create Incident checkbox as follows. e. In the Notifications section, enter an appropriate EM user or email address in the E-mail To field. f. Click Continue (in the top right-hand corner). The Create New Rule: Add Actions page displays. 15. Click Next. The Create New Rule: Specify Name and Description page displays. 16. Enter a rule name, and click Next. The Create New Rule: Review page appears. 17. Click Continue, and proceed to save the rule set. The incident rule set creation completes. After one of the agents in the group specified in the rule set has been stopped for over 5 minutes, EM will send a mail notification and create an incident as shown in the following screenshot. In conclusion, you have seen the steps to create an example incident rule set that only creates an incident and triggers a notification after an event has been open for a specified period. Such an incident rule can help prevent unnecessary incidents and alert notifications, leaving EM administrators more time for important tasks. - Loc Nhan
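    The heart of the rule set is the "event has been open for a specified duration" condition from step 14. The following Java sketch is not Enterprise Manager code; it is just that suppression check written out, with hypothetical class and method names, to make the behaviour explicit.

        import java.time.Duration;
        import java.time.Instant;

        public class OpenDurationRule {
            private final Duration requiredOpenTime;

            public OpenDurationRule(Duration requiredOpenTime) {
                this.requiredOpenTime = requiredOpenTime;
            }

            // Returns true only when the event is still open and has been open long enough.
            public boolean shouldNotify(Instant eventOpenedAt, boolean eventStillOpen, Instant now) {
                if (!eventStillOpen) {
                    return false;   // the agent came back before the window elapsed: no alert
                }
                return Duration.between(eventOpenedAt, now).compareTo(requiredOpenTime) >= 0;
            }

            public static void main(String[] args) {
                OpenDurationRule rule = new OpenDurationRule(Duration.ofMinutes(5));
                Instant openedAt = Instant.now().minus(Duration.ofMinutes(3));
                // Agent Unreachable for only 3 minutes so far -> still suppressed
                System.out.println(rule.shouldNotify(openedAt, true, Instant.now()));          // false
                // Same event re-evaluated after 6 minutes -> create incident and notify
                System.out.println(rule.shouldNotify(openedAt, true,
                        openedAt.plus(Duration.ofMinutes(6))));                                // true
            }
        }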

    Read the article

  • HAProxy: Display a "BADREQ" | BADREQ's by the thousands

    - by GruffTech
    My HAProxy configuration:

        #HA-Proxy version 1.3.22 2009/10/14 Copyright 2000-2009 Willy Tarreau <[email protected]>
        global
            maxconn 10000
            spread-checks 50
            user haproxy
            group haproxy
            daemon
            stats socket /tmp/haproxy
            log localhost local0
            log localhost local1 notice
        defaults
            mode http
            maxconn 50000
            timeout client 10000
            option forwardfor except 127.0.0.1
            option httpclose
            option httplog
        listen dcaustin 0.0.0.0:80
            mode http
            timeout connect 12000
            timeout server 60000
            timeout queue 120000
            balance roundrobin
            option httpchk GET /index.html
            log global
            option httplog
            option dontlog-normal
            server web1 10.10.10.101:80 maxconn 300 check fall 1
            server web2 10.10.10.102:80 maxconn 300 check fall 1
            server web3 10.10.10.103:80 maxconn 300 check fall 1
            server web4 10.10.10.104:80 maxconn 300 check fall 1
        listen stats 0.0.0.0:9000
            mode http
            balance
            log global
            timeout client 5000
            timeout connect 4000
            timeout server 30000
            stats uri /haproxy

    HAProxy is running, and the socket is working...

        adam@dcaustin:/etc/haproxy# echo "show info" | socat stdio /tmp/haproxy
        Name: HAProxy
        Version: 1.3.22
        Release_date: 2009/10/14
        Nbproc: 1
        Process_num: 1
        Pid: 6320
        Uptime: 0d 0h14m58s
        Uptime_sec: 898
        Memmax_MB: 0
        Ulimit-n: 20017
        Maxsock: 20017
        Maxconn: 10000
        Maxpipes: 0
        CurrConns: 47
        PipesUsed: 0
        PipesFree: 0
        Tasks: 51
        Run_queue: 1
        node: dcaustin
        description:

    "show errors" returns nothing from the socket...

        adam@dcaustin:/etc/haproxy# echo "show errors" | socat stdio /tmp/haproxy
        adam@dcaustin:/etc/haproxy#

    However... my error log is exploding with "badrequests" with the error code cR. According to the 1.3 documentation, cR means: The "timeout http-request" stroke before the client sent a full HTTP request. This is sometimes caused by too large TCP MSS values on the client side for PPPoE networks which cannot transport full-sized packets, or by clients sending requests by hand and not typing fast enough, or forgetting to enter the empty line at the end of the request. The HTTP status code is likely a 408 here. Correct on the 408, but we're getting literally thousands of these requests every hour. (This log snippet is a clip of about 10 seconds of time...)
        Jun 30 11:08:52 localhost haproxy[6320]: 92.22.213.32:26448 [30/Jun/2011:11:08:42.384] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10002 408 212 - - cR-- 35/35/18/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:54 localhost haproxy[6320]: 71.62.130.24:62818 [30/Jun/2011:11:08:44.457] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 39/39/16/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:55 localhost haproxy[6320]: 84.73.75.236:3589 [30/Jun/2011:11:08:45.021] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10008 408 212 - - cR-- 35/35/15/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:55 localhost haproxy[6320]: 69.39.20.190:49969 [30/Jun/2011:11:08:45.709] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 37/37/16/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:56 localhost haproxy[6320]: 2.29.0.9:58772 [30/Jun/2011:11:08:46.846] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 43/43/22/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:57 localhost haproxy[6320]: 212.139.250.242:57537 [30/Jun/2011:11:08:47.568] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 42/42/21/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55046 [30/Jun/2011:11:08:48.559] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 46/46/24/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55044 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10004 408 212 - - cR-- 45/45/24/0/0 0/0 "<BADREQ>"
        Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55045 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10005 408 212 - - cR-- 44/44/24/0/0 0/0 "<BADREQ>"
        Jun 30 11:09:00 localhost haproxy[6320]: 68.197.56.2:52781 [30/Jun/2011:11:08:50.975] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 49/49/28/0/0 0/0 "<BADREQ>"

    From what I read on Google, if I wanted to see what the bad requests are, I can send "show errors" to the socket and it will spit them out. We do run a pretty heavily trafficked website and the percentage of "BADREQS" to normal requests is quite low, but I'd like to be able to get ahold of what that request WAS so I can debug it.

    stats

        # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,
        dcaustin,FRONTEND,,,64,120,50000,88433,105889100,2553809875,0,0,4641,,,,,OPEN,,,,,,,,,1,1,0,,,,0,45,0,128,
        dcaustin,web1,0,0,10,28,300,20941,25402112,633143416,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,1,,20941,,2,11,,30,
        dcaustin,web2,0,0,9,30,300,20941,25026691,641475169,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,2,,20941,,2,11,,30,
        dcaustin,web3,0,0,10,27,300,20940,30116527,635015040,,0,,0,9,0,0,UP,1,1,0,0,0,2208,0,,1,1,3,,20940,,2,10,,31,
        dcaustin,web4,0,0,5,28,300,20940,25343770,643209546,,0,,0,8,0,0,UP,1,1,0,0,0,2208,0,,1,1,4,,20940,,2,11,,31,
        dcaustin,BACKEND,0,0,34,95,50000,83762,105889100,2553809875,0,0,,0,34,0,0,UP,4,4,0,,0,2208,0,,1,1,0,,83762,,1,43,,122,

    88,500 "sessions" and 4,500 errors in the last 20 minutes.
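    Since "show errors" returns nothing here, one way to at least see which termination states and clients dominate is to tally the termination-state column (the cR-- field) straight from the log. The Java sketch below does that; the field index is an assumption based on the syslog-prefixed httplog layout shown in the snippet above, so adjust it if your log format differs.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.HashMap;
        import java.util.Map;

        public class TerminationStateCount {
            public static void main(String[] args) throws Exception {
                Map<String, Integer> counts = new HashMap<>();
                try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] fields = line.trim().split("\\s+");
                        if (fields.length > 14 && fields[4].startsWith("haproxy")) {
                            // In this layout, field 14 is the termination state, e.g. cR--
                            counts.merge(fields[14], 1, Integer::sum);
                        }
                    }
                }
                counts.forEach((state, n) -> System.out.println(state + "  " + n));
            }
        }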

    Read the article

  • Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/xmlbeans/XmlException

    - by Dheeraj kumar
    Hi, I have to read xls file in java.I used poi-3.6 to read xls file in Eclipse.But i m getting this ERROR"Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/xmlbeans/XmlException at ReadExcel2.main(ReadExcel2.java:38)". I have added following jars 1)poi-3.6-20091214.jar 2)poi-contrib-3.6-20091214.jar 3)poi-examples-3.6-20091214.jar 4)poi-ooxml-3.6-20091214.jar 5)poi-ooxml-schemas-3.6-20091214.jar 6)poi-scratchpad-3.6-20091214.jar Below is the code which i m using: import org.apache.poi.ss.usermodel.Workbook; import org.apache.poi.ss.usermodel.Sheet; import org.apache.poi.ss.usermodel.Row; import org.apache.poi.ss.usermodel.Cell; import org.apache.poi.xssf.usermodel.XSSFWorkbook; import org.apache.poi.xssf.usermodel.XSSFCell; import org.apache.poi.xssf.usermodel.XSSFRow; import java.io.FileInputStream; import java.io.IOException; import java.util.Iterator; import java.util.List; import java.util.ArrayList; public class ReadExcel { public static void main(String[] args) throws Exception { // // An excel file name. You can create a file name with a full path // information. // String filename = "C:\\myExcel.xl"; // // Create an ArrayList to store the data read from excel sheet. // List sheetData = new ArrayList(); FileInputStream fis = null; try { // // Create a FileInputStream that will be use to read the excel file. // fis = new FileInputStream(filename); // // Create an excel workbook from the file system. // // HSSFWorkbook workbook = new HSSFWorkbook(fis); Workbook workbook = new XSSFWorkbook(fis); // // Get the first sheet on the workbook. // Sheet sheet = workbook.getSheetAt(0); // // When we have a sheet object in hand we can iterator on each // sheet's rows and on each row's cells. We store the data read // on an ArrayList so that we can printed the content of the excel // to the console. // Iterator rows = sheet.rowIterator(); while (rows.hasNext()) { Row row = (XSSFRow) rows.next(); Iterator cells = row.cellIterator(); List data = new ArrayList(); while (cells.hasNext()) { Cell cell = (XSSFCell) cells.next(); data.add(cell); } sheetData.add(data); } } catch (IOException e) { e.printStackTrace(); } finally { if (fis != null) { fis.close(); } } showExelData(sheetData); } private static void showExelData(List sheetData) { // // Iterates the data and print it out to the console. // for (int i = 0; i < sheetData.size(); i++) { List list = (List) sheetData.get(i); for (int j = 0; j < list.size(); j++) { Cell cell = (XSSFCell) list.get(j); System.out.print(cell.getRichStringCellValue().getString()); if (j < list.size() - 1) { System.out.print(", "); } } System.out.println(""); } } } Please help. thanks in anticipation, Regards, Dheeraj!
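    The missing class org/apache/xmlbeans/XmlException lives in the XMLBeans jar that poi-ooxml depends on; in the POI 3.6 binary download it sits in the ooxml-lib folder together with dom4j, so it is not among the six jars listed above. A hedged classpath sketch (the exact xmlbeans/dom4j file names are assumptions -- use whatever versions your download contains; in Eclipse, add the same two jars to the project build path):

        javac -cp .:poi-3.6-20091214.jar:poi-ooxml-3.6-20091214.jar:poi-ooxml-schemas-3.6-20091214.jar:xmlbeans-2.3.0.jar:dom4j-1.6.1.jar ReadExcel.java
        java  -cp .:poi-3.6-20091214.jar:poi-ooxml-3.6-20091214.jar:poi-ooxml-schemas-3.6-20091214.jar:xmlbeans-2.3.0.jar:dom4j-1.6.1.jar ReadExcel
        (on Windows use ';' rather than ':' as the path separator)

    Separately, the hard-coded "C:\\myExcel.xl" will need to point at a real .xlsx workbook for XSSFWorkbook to open it.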

    Read the article

  • Edit and render RichText

    - by OregonGhost
    We have an application (a custom network management tool for building automation) that supports printing labels that you can cut out and insert into the devices' front displays. In earlier versions of the tool (not developed in my company), the application just pushed the strings into an Excel file that the field technician could then manipulate (like formatting text). We didn't do this in the new version because it was hard (impossible) to keep the Excel file in sync, and to avoid a binding to an external application (let alone different versions of Excel). We're using PDFSharp for rendering the labels. It has a System.Drawing-like interface, but can output to a System.Drawing.Graphics (screen / printer) as well as to a PDF file, which is a requirement. Later, basic formatting was introduced like Font Family, Style, Size, Color which would apply to one label (i.e. to exactly one string). Now the customer wants to be able to apply these formats to single characters in a string. I think the easiest way would be to support a subset of RichText. It's not as easy as I thought though. Currently the editor just displays a TextBox for the label you want to edit, with the font set to the label's font. I thought I'd just replace it with RichTextBox, and update the formatting buttons to use the RichTextBox formatting properties. Fairly easy. However, I need to draw the text. I know you can get the RichTextBox to draw to a HDC or System.Drawing.Graphics - but as already said, I need it to use PDFSharp. Rendering to bitmaps is not an option, since the PDF must not be huge, and it's a lot of labels. Unfortunately I couldn't get the RichTextBox to tell me the layout of the text - I'm fine with doing the actual rendering by hand, as long as I know where to draw what. This is the first question: How can I get the properly layouted metrics of the rich text out of a RichTextBox? Or is there any way to convert the rich text to a vector graphics format that can be easily drawn manually? I know about NRTFTree which can be used to parse and manipulate RichText. The documentation is bad (actually I don't know, it's Spanish), but I think I can get it to work. As far as I understood, it won't provide layouting as well. Because of this, I think I'll have to write a custom edit control (remember, it's basically just one or two line labels with basic RTF formatting, not a full-fledged edit control - more like editing a textbox in PowerPoint) and write custom text layout logic that used PDFSharp rather than System.Drawing for drawing. Is there any existing, even if partial, solution available, either for the editing or for doing the layout manually (or both)? Or is there an entirely different approach I'm just not seeing? Bonus points if exporting the label texts as RTF into a CSV file, and then importing in Excel retains the formatting. For the editing part, I need it to work in Windows Forms. Other than that it's not Windows-Forms-related, I think.
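    One way to sidestep the "get the layout out of the RichTextBox" problem is to ask the control itself for per-character positions and formatting, then replay them through PDFsharp. The sketch below is hedged: the pixel-to-point scaling and baseline offset are naive, colours are ignored, and it is only a starting point for short one- or two-line labels, not a general RTF renderer.

        using System.Drawing;
        using System.Windows.Forms;
        using PdfSharp.Drawing;
        using PdfSharp.Pdf;

        static class LabelPdfSketch
        {
            public static void Render(RichTextBox box, string pdfPath)
            {
                PdfDocument doc = new PdfDocument();
                PdfPage page = doc.AddPage();
                using (XGraphics gfx = XGraphics.FromPdfPage(page))
                using (Graphics probe = box.CreateGraphics())
                {
                    double scale = 72.0 / probe.DpiX;      // device pixels -> PDF points
                    for (int i = 0; i < box.TextLength; i++)
                    {
                        box.Select(i, 1);                  // formatting of this one character
                        Font f = box.SelectionFont ?? box.Font;
                        Point px = box.GetPositionFromCharIndex(i);

                        XFontStyle style = XFontStyle.Regular;
                        if (f.Bold) style |= XFontStyle.Bold;
                        if (f.Italic) style |= XFontStyle.Italic;

                        gfx.DrawString(box.Text[i].ToString(),
                                       new XFont(f.Name, f.SizeInPoints, style),
                                       XBrushes.Black,
                                       new XPoint(px.X * scale, px.Y * scale + f.SizeInPoints));
                    }
                }
                doc.Save(pdfPath);
            }
        }

    If the export route is pursued instead, RichTextBox.SelectedRtf over the same one-character selections hands back the raw RTF fragments.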

    Read the article

  • NLP with greatly constrained input and abilities

    - by Mike F
    Hat in hand here. I'm a seasoned developer and I would be grateful for a bit of help. I don't have time to read or digest long intricate discussions on theoretical concepts around NLP (or go get my PHD). That said, I have read a few and it's a damn interesting field. The problem is I need real world solutions, for real world products, in real world time frames. The problem I'm having is right now I'm not sure what the right questions are to ask to get started implementing. I believe this is mostly related to vocabulary. I'll read somewhere, a blog post, a forum post, a whitepaper, and it says, I'm doing flooping with the blargy blarg method, and I go google flooping and blargy blarg, and I get references to more obscurity. It seemingly never ends. So, my question is multiphased. First, more generally, how do I become passingly educated on this quickly? Just in time educated. I only need to know what I need to know to take the next step. I've spent 20 years writing code. Explain quick. I'll get it. (I mean provide a reference to something that explains quickly of course). I'm happy to read the right book, but I don't want to read a book where I read the chapter introduction that explains what floopy floop is and then skip over the rest of the chapter with examples of floopy flooping (because now I get what it is). I also don't want to read a book that goes into too much detail with theoretical underpinnings or history. For example, the Jurafsky book seems like way more than I need: http://www.amazon.com/gp/product/0131873210. But I will read it if this is the right book to read. (It's also dang expensive!) I need the root node of the expedited learning tree here, if you will. Point me in the right direction and I'll be quite grateful. I'm expecting quite a lot of firehose drinking - I just need the right firehose. Second, what I need to do is take a single sentence, with a very reduced vocabulary, and get a grammar tree (sorry if this is the wrong terminology) that I can do something with. I know I could easily write this command line input style in c in a more conventional manner, but I need it to be way better than that. But I don't need a chatterbot either. What I'm doing needs to live in a constrained environment. I can't use Python (unfortunately). I can't ship with gigabytes of corpuses. I need any libraries I use to be in c/c++. If I have to write this myself, I will. Hopefully, it will be achievable considering the reduced problem set. Maybe, probably, that's just naive. If so, let me know. :-) Thanks in advance - Mike
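    For the "parse one constrained sentence" part, a closed vocabulary makes a hand-rolled recursive-descent pass over the tokens perfectly adequate; below is a hedged toy sketch in C++ for a grammar like sentence := VERB NOUN [PREP NOUN] (the vocabulary, roles and sample sentence are invented for the example).

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>
        #include <set>

        struct Node { std::string role, word; };

        int main() {
            const std::set<std::string> verbs = { "open", "close", "set" };
            const std::set<std::string> nouns = { "valve", "damper", "fan" };
            const std::set<std::string> preps = { "on", "in" };

            std::string line = "open valve on fan";
            std::istringstream in(line);
            std::vector<std::string> tok;
            for (std::string w; in >> w; ) tok.push_back(w);

            std::vector<Node> tree;            // role-tagged tokens in grammar order
            size_t i = 0;
            bool ok = i < tok.size() && verbs.count(tok[i]);
            if (ok) tree.push_back({ "VERB", tok[i++] });
            ok = ok && i < tok.size() && nouns.count(tok[i]);
            if (ok) tree.push_back({ "NOUN", tok[i++] });
            if (ok && i < tok.size() && preps.count(tok[i])) {   // optional prepositional phrase
                tree.push_back({ "PREP", tok[i++] });
                ok = i < tok.size() && nouns.count(tok[i]);
                if (ok) tree.push_back({ "NOUN", tok[i++] });
            }
            ok = ok && i == tok.size();

            if (!ok) { std::cout << "no parse\n"; return 1; }
            for (const Node& n : tree) std::cout << n.role << "(" << n.word << ") ";
            std::cout << "\n";                 // VERB(open) NOUN(valve) PREP(on) NOUN(fan)
        }

    Anything that outgrows this -- real ambiguity, larger grammars, optional modifiers everywhere -- is the point where a chart parser or an external library starts to pay for its learning curve.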

    Read the article

  • What does it mean when ARP shows <incomplete> on eth1

    - by Geoff Dalgas
    We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two linux instances to provide a failover. Each server has with their own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP: 69.59.196.211 The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the windows servers behind them and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our windows servers behind our linux gateways. HAProxy will detect the server is offline which we can verify by remoting to the failed server and attempting to ping the gateway: Pinging 69.59.196.211 with 32 bytes of data: Reply from 69.59.196.220: Destination host unreachable. Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211): Interface: 69.59.196.220 --- 0xa Internet Address Physical Address Type 69.59.196.161 00-26-88-63-c7-80 dynamic 69.59.196.210 00-15-5d-0a-3e-0e dynamic 69.59.196.212 00-21-5e-4d-45-c9 dynamic 69.59.196.213 00-15-5d-00-b2-0d dynamic 69.59.196.215 00-21-5e-4d-61-1a dynamic 69.59.196.217 00-21-5e-4d-2c-e8 dynamic 69.59.196.219 00-21-5e-4d-38-e5 dynamic 69.59.196.221 00-15-5d-00-b2-0d dynamic 69.59.196.222 00-15-5d-0a-3e-09 dynamic 69.59.196.223 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 225.0.0.1 01-00-5e-00-00-01 static On our linux gateway instances arp -a shows: peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take help resolve this issue? THINGS WE HAVE TRIED I added a static arp entry for testing on one of the linux gateways which still didn't help. root@haproxy2:~# arp -a peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1 root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d root@haproxy2:~# ping 69.59.196.220 PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data. --- 69.59.196.220 ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6006ms Rebooting the windows web server solves this issue temporarily with no other changes to the network but our experience shows this issue will come back. 
Swapping network cards and switches I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand, no change) Changing network hardware driver versions We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2. Replacing network cables As a last ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred. Disable checksum offload, remove TProxy We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.
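    A hedged sketch of how to watch the failing exchange on the wire from the gateway side (flags as in iputils arping and tcpdump; double-check against your distribution's man pages): if the who-has requests go out but no reply ever comes back from .220, the fault is on the Windows box or its switch port rather than in the gateway's ARP handling.

        arping -I eth1 -c 4 69.59.196.220
        tcpdump -n -e -i eth1 arp and host 69.59.196.220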

    Read the article

  • MVC 2 Ajax.BeginForm passes returned HTML to a JavaScript function

    - by Joe
    Hi, I have a small partial Create Person form in a page above a table of results. I want to be able to post the form to the server, which I can do no problem with ajax.Beginform. <% using (Ajax.BeginForm("Create", new AjaxOptions { OnComplete = "ProcessResponse" })) {%> <fieldset> <legend>Fields</legend> <div class="editor-label"> <%=Html.LabelFor(model => model.FirstName)%> </div> <div class="editor-field"> <%=Html.TextBoxFor(model => model.FirstName)%> <%=Html.ValidationMessageFor(model => model.FirstName)%> </div> <div class="editor-label"> <%=Html.LabelFor(model => model.LastName)%> </div> <div class="editor-field"> <%=Html.TextBoxFor(model => model.LastName)%> <%=Html.ValidationMessageFor(model => model.LastName)%> </div> <p> <input type="submit" /> </p> </fieldset> <% } %> Then in my controller I want to be able to post back a partial which is just a table row if the create is successful and append it to the table, which I can do easily with jquery. $('#personTable tr:last').after(data); However, if server validation fails I want to pass back my partial create person form with the validation errors and replace the existing Create Person form. I have tried returning a Json array Controller: return Json(new { Success = true, Html= this.RenderViewToString("PersonSubform",person) }); Javascript: var json_data = response.get_response().get_object(); with a pass/fail flag and the partial rendered as a string using the solition below but that doesnt render the mvc validation controls when the form fails. SO RenderPartialToString So, is there any way I can hand my javascript the out of the box PartialView("PersonForm") as its returned from my ajax.form? Can I pass some addition info as a Json array so I can tell if its pass or fail and maybe add a message? UPDATE I can now pass the HTML of a PartialView to my javascript but I need to pass some additional data pairs like ServerValidation : true/false and ActionMessage : "you have just created a Person Bill". Ideally I would pass a Json array rather than hidden fields in my partial. function ProcessResponse(response) { var html = response.get_data(); $("#campaignSubform").html(html); } Many thanks in advance
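    One approach that fits the question -- hedged: RenderViewToString here is the helper from the linked answer, and the repository call is a placeholder for whatever persistence is in use -- is to always return the same JSON shape and let the client branch on a success flag:

        [HttpPost]
        public ActionResult Create(Person person)
        {
            if (!ModelState.IsValid)
            {
                // send the form partial back with its validation state intact
                return Json(new { Success = false,
                                  Message = "Please correct the errors below.",
                                  Html = this.RenderViewToString("PersonForm", person) });
            }

            _repository.Save(person);   // placeholder for the real save
            return Json(new { Success = true,
                              Message = "You have just created " + person.FirstName,
                              Html = this.RenderViewToString("PersonSubform", person) });
        }

    ProcessResponse can then read var data = response.get_response().get_object(); and branch on data.Success -- append data.Html after #personTable tr:last when it is true, otherwise replace the #campaignSubform contents with it. For the validation messages to re-render, the RenderViewToString implementation has to be given the controller's ControllerContext/ViewData so that ModelState travels with the partial.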

    Read the article

  • Enable button based on TextBox value (WPF)

    - by zendar
    This is MVVM application. There is a window and related view model class. There is TextBox, Button and ListBox on form. Button is bound to DelegateCommand that has CanExecute function. Idea is that user enters some data in text box, presses button and data is appended to list box. I would like to enable command (and button) when user enters correct data in TextBox. Things work like this now: CanExecute() method contains code that checks if data in property bound to text box is correct. Text box is bound to property in view model UpdateSourceTrigger is set to PropertyChanged and property in view model is updated after each key user presses. Problem is that CanExecute() does not fire when user enters data in text box. It doesn't fire even when text box lose focus. How could I make this work? Edit: Re Yanko's comment: Delegate command is implemented in MVVM toolkit template and when you create new MVVM project, there is Delegate command in solution. As much as I saw in Prism videos this should be the same class (or at least very similar). Here is XAML snippet: ... <UserControl.Resources> <views:CommandReference x:Key="AddObjectCommandReference" Command="{Binding AddObjectCommand}" /> </UserControl.Resources> ... <TextBox Text="{Binding ObjectName, UpdateSourceTrigger=PropertyChanged}"> </TextBox> <Button Command="{StaticResource AddObjectCommandReference}">Add</Button> ... View model: // Property bound to textbox public string ObjectName { get { return objectName; } set { objectName = value; OnPropertyChanged("ObjectName"); } } // Command bound to button public ICommand AddObjectCommand { get { if (addObjectCommand == null) { addObjectCommand = new DelegateCommand(AddObject, CanAddObject); } return addObjectCommand; } } private void AddObject() { if (ObjectName == null || ObjectName.Length == 0) return; objectNames.AddSourceFile(ObjectName); OnPropertyChanged("ObjectNames"); // refresh listbox } private bool CanAddObject() { return ObjectName != null && ObjectName.Length > 0; } As I wrote in the first part of question, following things work: property setter for ObjectName is triggered on every keypress in textbox if I put return true; in CanAddObject(), command is active (button to) It looks to me that binding is correct. Thing that I don't know is how to make CanExecute() fire in setter of ObjectName property from above code. Re Ben's and Abe's answers: CanExecuteChanged() is event handler and compiler complains: The event 'System.Windows.Input.ICommand.CanExecuteChanged' can only appear on the left hand side of += or -= there are only two more members of ICommand: Execute() and CanExecute() Do you have some example that shows how can I make command call CanExecute(). I found command manager helper class in DelegateCommand.cs and I'll look into it, maybe there is some mechanism that could help. Anyway, idea that in order to activate command based on user input, one needs to "nudge" command object in property setter code looks clumsy. It will introduce dependencies and one of big points of MVVM is reducing them. Is it possible to solve this problem by using dependency properties?
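    CanExecute is only re-evaluated when something raises the command's CanExecuteChanged event, and a plain property setter does not do that on its own. A hedged sketch of the usual fix (RaiseCanExecuteChanged is the Prism-style name and is assumed here; if this toolkit's DelegateCommand lacks it, the framework call CommandManager.InvalidateRequerySuggested() forces every command to re-query):

        public string ObjectName
        {
            get { return objectName; }
            set
            {
                objectName = value;
                OnPropertyChanged("ObjectName");

                if (addObjectCommand != null)
                    addObjectCommand.RaiseCanExecuteChanged();      // assumed member name
                // or, library-agnostic:
                // CommandManager.InvalidateRequerySuggested();
            }
        }

    It may also be worth binding the button straight to {Binding AddObjectCommand} instead of going through the StaticResource CommandReference, since that wrapper is aimed at input bindings and may not forward CanExecuteChanged on to the button.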

    Read the article

  • Android - Saving an object in onSaveInstanceState?

    - by Donal Rafferty
    I have created a small XML parsing application for Android that displays information in a listview and then allows a user to click on the list view and a dialog with further info will pop up. The problem is that when the screen orientation is changed when a dialog screen is open I get a null pointer error. The null pointer occurs on the following line: if(setting.getAddForPublicUserNames() == 1){ This line is part of my dialogPrepare method: @Override public void onPrepareDialog(int id, Dialog dialog) { switch(id) { case (SETTINGS_DIALOG) : afpunText = ""; if(setting.getAddForPublicUserNames() == 1){ afpunText = "Yes"; } else{ afpunText = "No"; } String Text = "Login Settings: " + "\n" + "Password: " + setting.getPassword() + "\n" + "Server: " + setting.getServerAddress() + "\n" + "Register: " + setting.getRegistrarAddress() + "\n" + "Realm: " + setting.getRealm() + "\n" + "Public UserNames: " + afpunText + "\n" + "Preference Settings: " + "\n" + "Request VDN: " + setting.getRequestVDN() + "\n" + "Handover Settings: " + "\n" + "Enable Handover: " + setting.getEnableHandover() + "\n" + "Hand Over Number: " + setting.getHandoverNum() + "\n"; AlertDialog settingsDialog = (AlertDialog)dialog; settingsDialog.setTitle("Auth ID: " + setting.getUserName()); tv = (TextView)settingsDialog.findViewById(R.id.detailsTextView); if (tv != null) tv.setText(Text); break; } } So the error is that my variable setting is null after the screen orientation changes. I have tried to use the onSaveInstance state methods to fix that as follows: @Override public void onSaveInstanceState(Bundle savedInstanceState) { for(int i = 0; i < settings.size(); i++){ savedInstanceState.putString("Username"+i, settings.get(i).getUserName()); savedInstanceState.putString("Password"+i, settings.get(i).getPassword()); savedInstanceState.putString("Server"+i, settings.get(i).getServerAddress()); savedInstanceState.putString("Registrar"+i, settings.get(i).getRegistrarAddress()); savedInstanceState.putString("Realm"+i, settings.get(i).getRealm()); savedInstanceState.putInt("PUserNames"+i, settings.get(i).getAddForPublicUserNames()); savedInstanceState.putString("RequestVDN"+i, settings.get(i).getRequestVDN()); savedInstanceState.putString("EnableHandOver"+i, settings.get(i).getEnableHandover()); savedInstanceState.putString("HandOverNum"+i, settings.get(i).getHandoverNum()); } super.onSaveInstanceState(savedInstanceState); } and @Override public void onRestoreInstanceState(Bundle savedInstanceState) { super.onRestoreInstanceState(savedInstanceState); //Check to see if this is required // Restore UI state from the savedInstanceState. // This bundle has also been passed to onCreate. for(int i = 0; i<settings.size(); i++){ settings.get(i).setUserName(savedInstanceState.getString("Username"+i)); settings.get(i).setPassword(savedInstanceState.getString("Password"+i)) ; settings.get(i).setServerAddress(savedInstanceState.getString("Server"+i)); settings.get(i).setRegistrarAddress(savedInstanceState.getString("Registrar"+i)); settings.get(i).setRealm(savedInstanceState.getString("Realm"+i)); settings.get(i).setAddForPublicUserNames(savedInstanceState.getInt("PUserNames"+i)); settings.get(i).setRequestVDN(savedInstanceState.getString("RequestVDN"+i)); settings.get(i).setEnableHandover(savedInstanceState.getString("EnableHandOver"+i)); settings.get(i).setHandoverNum(savedInstanceState.getString("HandOverNum"+i)); } } However the error still remains, I think I have to save the selected setting from what was selected from the ListView? 
But how do I save a Setting object in onSaveInstanceState?
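    One way to keep the currently selected Setting across a rotation is to make the class serializable and put the instance itself in the bundle; a hedged sketch, assuming Setting holds only strings and ints as the per-field code above suggests:

        public class Setting implements java.io.Serializable {
            private static final long serialVersionUID = 1L;
            // existing fields, getters and setters unchanged ...
        }

        @Override
        public void onSaveInstanceState(Bundle savedInstanceState) {
            super.onSaveInstanceState(savedInstanceState);
            savedInstanceState.putSerializable("selectedSetting", setting);
        }

        @Override
        public void onRestoreInstanceState(Bundle savedInstanceState) {
            super.onRestoreInstanceState(savedInstanceState);
            setting = (Setting) savedInstanceState.getSerializable("selectedSetting");
        }

    Saving just the selected position from the ListView and re-looking the Setting up in onRestoreInstanceState works equally well and avoids serializing anything.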

    Read the article

  • localhost/phpmyadmin pulls blank page

    - by Atul Modi
    When I tried configuring local machine as a Internet Gateway with website development capabilities over it and I installed all required software into it. I also had disable the selinux into it. But PROBLEM is when I do http://localhost/phpMyAdmin or all lower case than the page shows it as a blank page. I am pasting code from httpd.conf file into this as well as from phpMyAdmin.conf file I am using Fedora 16 for this. httpd.conf ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 5 StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule dbd_module modules/mod_dbd.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module 
modules/mod_version.so Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost UseCanonicalName Off DocumentRoot "/var/www/html" Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all UserDir disabled DirectoryIndex index.html index.htm index.php AccessFileName .htaccess Order allow,deny Deny from all Satisfy All TypesConfig /etc/mime.types DefaultType text/plain MIMEMagicFile conf/magic HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %b" common LogFormat "%{Referer}i - %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" AllowOverride None Options None Order allow,deny Allow from all IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. 
AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ # HEADER README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-tar .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .xml AddHandler application/x-httpd-php .xml AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml Alias /error/ "/var/www/error/" AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en ForceLanguagePriority Prefer Fallback ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4.0" force-response-1.0 BrowserMatch "Java/1.0" force-response-1.0 BrowserMatch "JDK/1.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully Order allow,deny Allow from all # phpMyAdmin.conf Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Can anyone help into this area please. Urgent reply will be appreciatable because i am struggling since one and half month for this matter. thank you, Atul

    Read the article

  • Propel Single Table Inheritance Issue

    - by lo_fye
    I have a table called "talk", which is defined as abstract in my schema.xml file. It generates 4 objects (1 per classkey): Comment, Rating, Review, Checkin It also generates TalkPeer, but I couldn't get it to generate the other 4 peers (CommentPeer, RatingPeer, ReviewPeer, CheckinPeer), so I created them by hand, and made them inherit from TalkPeer.php, which inherits from BaseTalkPeer. I then implemented getOMClass() in each of those peers. The problem is that when I do queries using the 4 peers, they return all 4 types of objects. That is, ReviewPeer will return Visits, Ratings, Comments, AND Reviews. Example: $c = new Criteria(); $c->add(RatingPeer::VALUE, 5, Criteria::GREATER_THAN); $positive_ratings = RatingPeer::doSelect($c); This returns all comments, ratings, reviews, & checkins that have a value 5. ReviewPeer should only return Review objects, and can't figure out how to do this. Do I actually have to go through and change all my criteria to manually specify the classkey? That seems a little pointless, since the Peer name already distinct. I don't want to have to customize each Peer. I should be able to customize JUST the TalkPeer, since they all inherit from it... I just can't figure out how. I tried changing doSelectStmt just in TalkPeer so that it automatically adds the CLASSKEY restriction to the Criteria. It almost works, but I get a: Fatal error: Cannot instantiate abstract class Talk in /models/om/BaseTalkPeer.php on line 503. Line 503 is in BaseTalkPeer::populateObjects(), and is the 3rd line below: $cls = TalkPeer::getOMClass($row, 0); $cls = substr('.'.$cls, strrpos('.'.$cls, '.') + 1); $obj = new $cls(); The docs talked about overriding BaseTalkPeer::populateObject(). I have a feeling that's my problem, but even after reading the source code, I still couldn't figure out how to get it to work. Here is what I tried in TalkPeer::doSelectStmt: public static function doSelectStmt(Criteria $criteria, PropelPDO $con = null) { $keys = array('models.Visit'=>1,'models.Comment'=>2,'models.Rating'=>3,'models.Review'=>4); $class_name = self::getOMClass(); if(isset($keys[$class_name])) { //Talk itself is not a returnable type, so we must check $class_key = $keys[$class_name]; $criteria->add(TalkPeer::CLASS_KEY, $class_key); } return parent::doSelectStmt($criteria, $con = null); } Here is an example of my getOMClass method from ReviewPeer: public static function getOMClass() { return self::CLASSNAME_4; //aka 'talk.Review'; } Here is the relevant bit of my schema: <table name="talk" idMethod="native" abstract="true"> <column name="talk_pk" type="INTEGER" required="true" autoIncrement="true" primaryKey="true" /> <column name="class_key" type="INTEGER" required="true" default="" inheritance="single"> <inheritance key="1" class="Visit" extends="models.Talk" /> <inheritance key="2" class="Comment" extends="models.Talk" /> <inheritance key="3" class="Rating" extends="models.Talk" /> <inheritance key="4" class="Review" extends="models.Rating" /> </column> </table> P.S. - No, I can't upgrade from 1.3 to 1.4. There's just too much code that would need to be re-tested
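    One common pattern for Propel single table inheritance in this era (hedged -- written against the generated API as it appears in the snippets above) is to add the class-key restriction in each concrete peer before delegating, rather than once in the shared TalkPeer, because pre-5.3 PHP has no late static binding to tell the shared code which peer was actually called:

        // ReviewPeer.php -- sketch only
        class ReviewPeer extends TalkPeer
        {
            const CLASS_KEY_REVIEW = 4;   // matches <inheritance key="4" class="Review"/>

            public static function doSelect(Criteria $criteria, PropelPDO $con = null)
            {
                $criteria->add(TalkPeer::CLASS_KEY, self::CLASS_KEY_REVIEW);
                return parent::doSelect($criteria, $con);
            }

            public static function getOMClass()
            {
                return self::CLASSNAME_4;   // 'talk.Review'
            }
        }

    The same criterion would need to be added in doCount() and any other doSelect-style entry points the code relies on.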

    Read the article

  • Summer Programming Plans

    - by Gabe
    I've wanted to start "hacking" for many months now. But I put it off in favor of school and other things. Now, though, I'm free for the summer and want to learn as much as I can. I have a rough idea of what I want to try my hand at, but need some guidance as to what specifically - and how - I should learn. This is my plan so far: 1) Get good at programming in general. I plan to read up on how to think/work like a programmer. I'm waiting for the Pragmatic Programmer to arrive, which will be the first book I read. Q: What other books/ebooks should I look at? What more can I do here? 2) Learn/Improve at HTML/CSS. My first project will be to make a personal website/blog for myself using HTML and CSS. ----Then I hope to write/design articles like Dustin Curtis. After I finish this (and learn a programming language) I'll try to create user-based a user-focused website. Q: It's my understanding that just trying to design/manage websites is a good way to learn/improve at HTML/CSS. Is that all correct? 3) Try music development. This might be a sort of stretch for stackoverflow, but I'm interested in mixing/making techno songs. (Think Justice, or Daft Punk, or MSTRKRFT.) Q: I have a Mac. Any ideas on how I could start/learn music making? Any programs I should download, for instance? 4) My main goal: Learning a web development language/framework. I'm a year into learning/using C++. But what I really want to do is develop websites and web apps. I've searched online, and there seems to be great debate over which language/framework to learn first (and which is best). I think I've narrowed it down to three: Ruby (Rails), Python (Django), and PHP (?). Q #1: Which should I learn and use first? (Reasons?) Q #2: One reason I was leaning towards PHP is that I'm taking a PHP development course next semester. Learning it now would make that course easy. If PHP was not the answer to Q #1, is it worth learning both? Or, would it be better to just focus on PHP for this summer and next semester, and then transition thereafter to a better language? 5) iPhone/iPad Programming (Maybe). I've a number of simple, useful app ideas that I'd like to eventually get too. I just bought a Mac, as well as a few app development books. Q #1: Am I spreading myself thin trying to learn all of the above, and objective-C? Q #2: How much harder/easier is objective-C compared to the above languages? Also, how easy is it to learn obj-C after learning a web development language (and some C++)? Q #3: Yes or no? Should I go for it, or just keeep with #1-4 for now? Also: If you have any tips on how I should learn (or how you learned to hack), I'm all ears. I'd be especially interested in how you planned out learning: did you just hack whenever you felt like it, or did you "study" the language a few hours a day, or something else? Thanks so much, guys.

    Read the article

  • C++ virtual functions: problem with vtable

    - by adivasile
    I'm doing a little project in C++ and I've come into some problems regarding virtual functions. I have a base class with some virtual functions: #ifndef COLLISIONSHAPE_H_ #define COLLISIONSHAPE_H_ namespace domino { class CollisionShape : public DominoItem { public: // CONSTRUCTOR //------------------------------------------------- // SETTERS //------------------------------------------------- // GETTERS //------------------------------------------------- virtual void GetRadius() = 0; virtual void GetPosition() = 0; virtual void GetGrowth(CollisionShape* other) = 0; virtual void GetSceneNode(); // OTHER //------------------------------------------------- virtual bool overlaps(CollisionShape* shape) = 0; }; } #endif /* COLLISIONSHAPE_H_ */ and a SphereShape class which extends CollisionShape and implements the methods above /* SphereShape.h */ #ifndef SPHERESHAPE_H_ #define SPHERESHAPE_H_ #include "CollisionShape.h" namespace domino { class SphereShape : public CollisionShape { public: // CONSTRUCTOR //------------------------------------------------- SphereShape(); SphereShape(CollisionShape* shape1, CollisionShape* shape2); // DESTRUCTOR //------------------------------------------------- ~SphereShape(); // SETTERS //------------------------------------------------- void SetPosition(); void SetRadius(); // GETTERS //------------------------------------------------- cl_float GetRadius(); cl_float3 GetPosition(); SceneNode* GetSceneNode(); cl_float GetGrowth(CollisionShape* other); // OTHER //------------------------------------------------- bool overlaps(CollisionShape* shape); }; } #endif /* SPHERESHAPE_H_ */ and the .cpp file: /*SphereShape.cpp*/ #include "SphereShape.h" #define max(a,b) (a>b?a:b) namespace domino { // CONSTRUCTOR //------------------------------------------------- SphereShape::SphereShape(CollisionShape* shape1, CollisionShape* shape2) { } // DESTRUCTOR //------------------------------------------------- SphereShape::~SphereShape() { } // SETTERS //------------------------------------------------- void SphereShape::SetPosition() { } void SphereShape::SetRadius() { } // GETTERS //------------------------------------------------- void SphereShape::GetRadius() { } void SphereShape::GetPosition() { } void SphereShape::GetSceneNode() { } void SphereShape::GetGrowth(CollisionShape* other) { } // OTHER //------------------------------------------------- bool SphereShape::overlaps(CollisionShape* shape) { return true; } } These classes, along some other get compiled into a shared library. 
Building libdomino.so g++ -m32 -lpthread -ldl -L/usr/X11R6/lib -lglut -lGLU -lGL -shared -lSDKUtil -lglut -lGLEW -lOpenCL -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86 -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/TempSDKUtil/lib/x86 -L"/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86" -lSDKUtil -lglut -lGLEW -lOpenCL -o build/debug/x86/libdomino.so build/debug/x86//Material.o build/debug/x86//Body.o build/debug/x86//SphereShape.o build/debug/x86//World.o build/debug/x86//Engine.o build/debug/x86//BVHNode.o When I compile the code that uses this library I get the following error: ../../../lib/x86//libdomino.so: undefined reference to `vtable for domino::CollisionShape' ../../../lib/x86//libdomino.so: undefined reference to `typeinfo for domino::CollisionShape' Command used to compile the demo that uses the library: g++ -o build/debug/x86/startdemo build/debug/x86//CMesh.o build/debug/x86//CSceneNode.o build/debug/x86//OFF.o build/debug/x86//Light.o build/debug/x86//main.o build/debug/x86//Camera.o -m32 -lpthread -ldl -L/usr/X11R6/lib -lglut -lGLU -lGL -lSDKUtil -lglut -lGLEW -ldomino -lSDKUtil -lOpenCL -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86 -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/TempSDKUtil/lib/x86 -L../../../lib/x86/ -L"/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86" (the -ldomino flag) And when I run the demo, I manually tell it about the library: LD_LIBRARY_PATH=../../lib/x86/:$AMDAPPSDKROOT/lib/x86:$LD_LIBRARY_PATH bin/x86/startdemo After reading a bit about virtual functions and virtual tables I understood that virtual tables are handled by the compiler and I shouldn't worry about it, so I'm a little bit confused on how to handle this issue. I'm using gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC) Later edit: I'm really sorry, but I wrote the code by hand directly here. I have defined the return types in the code. I apologize to the 2 people that answered below. I have to mention that I am a beginner at using more complex project layouts in C++.By this I mean more complex makefiles, shared libraries, stuff like that.
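    An "undefined reference to vtable/typeinfo" for a class almost always means a non-pure virtual member function was declared but never defined -- here virtual void GetSceneNode(); in CollisionShape has no body anywhere, so g++ never emits the class's vtable (it anchors the vtable to the definition of the first non-inline virtual member). A minimal, self-contained sketch of the failure and the two usual fixes (the simplified types stand in for the real ones; the mismatched void/cl_float return types in the pasted header are the transcription issue already mentioned):

        struct CollisionShape {
            virtual ~CollisionShape() {}          // virtual dtor for a base deleted via pointer
            virtual float GetRadius() = 0;
            // BROKEN: declared, not pure, never defined anywhere:
            //     virtual void GetSceneNode();
            // FIX 1: make it pure virtual ...
            virtual void GetSceneNode() = 0;
            // FIX 2: ... or keep it non-pure and define it in exactly one .cpp
            // inside libdomino.so.
        };

        struct SphereShape : CollisionShape {
            float GetRadius() { return 1.0f; }
            void GetSceneNode() {}
        };

        int main() {
            CollisionShape* s = new SphereShape();
            float r = s->GetRadius();
            s->GetSceneNode();
            delete s;
            return static_cast<int>(r);
        }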

    Read the article

  • Is this question too hard for a seasoned C++ architect?

    - by Monomer
    Background Information We're looking to hire a seasoned C++ architect (10+years dev, of which at least 6years must be C++ ) for a high frequency trading platform. Job advert says STL, Boost proficiency is a must with preferences to modern uses of C++. The company I work for is a Fortune 500 IB (aka finance industry), it requires passes in all the standard SHL tests (numeric, vocab, spatial etc) before interviews can commence. Everyone on the team was given the task of coming up with one question to ask the candidates during a written/typed test, please note this is the second test provided to the candidates, the first being Advanced IKM C++ test, done in the offices supervised and without internet access. People passing that do the second test. After roughly 70 candidates, my question has been determined to be statistically the worst performing - aka least number of people attempted it, furthermore even less people were able to give meaningful answers. Please note, the second test is not timed, the candidate can literally take as long as they like (we've had one person take roughly 10.5hrs) My question to SO is this, after SHL and IKM adv c++ tests, backed up with at least 6+ years C++ development experience, is it still ok not to be able to even comment about let alone come up with some loose strategy for solving the following question. The Question There is a class C with methods foo, boo, boo_and_foo and foo_and_boo. Each method takes i,j,k and l clock cycles respectively, where i < j, k < i+j and l < i+j. class C { public: int foo() {...} int boo() {...} int boo_and_foo() {...} int foo_and_boo() {...} }; In code one might write: C c; . . int i = c.foo() + c.boo(); But it would be better to have: int i = c.foo_and_boo(); What changes or techniques could one make to the definition of C, that would allow similar syntax of the original usage, but instead have the compiler generate the latter. Note that foo and boo are not commutative. Possible Solution We were basically looking for an expression templates based approach, and were willing to give marks to anyone who had even hinted or used the phrase or related terminology. We got only two people that used the wording, but weren't able to properly describe how they accomplish the task in detail. We use such techniques all over the place, due to the use of various mathematical operators for matrix and vector based calculations, for example to decide when to use IPP or hand woven implementations at compile time for a particular architecture and many other things. The particular area of software development requires microsecond response times. I believe could/should be able to teach a junior such techniques, but given the assumed caliber of candidates I expected a little more. Is this really a difficult question? Should it be removed? Or are we just not seeing the right candidates?
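    The technique the question is fishing for can be sketched in a few dozen lines. The toy below is illustrative only -- the *_impl names and constant return values are invented stand-ins for the real work -- but it shows the shape of an expression-template answer: foo() and boo() hand back cheap proxies, an overloaded operator+ fuses the pair into one call, and an implicit int conversion keeps single calls working.

        #include <iostream>

        class C {
        public:
            // the real work (cycle counts i, j, l, k from the question)
            int foo_impl()    { return 1; }
            int boo_impl()    { return 2; }
            int foo_and_boo() { return 3; }
            int boo_and_foo() { return 4; }

            // proxies returned instead of plain ints
            struct FooExpr {
                C* c;
                operator int() const { return c->foo_impl(); }   // used when not fused
            };
            struct BooExpr {
                C* c;
                operator int() const { return c->boo_impl(); }
            };

            FooExpr foo() { FooExpr e = { this }; return e; }
            BooExpr boo() { BooExpr e = { this }; return e; }

            // fusing overloads, found through argument-dependent lookup
            friend int operator+(FooExpr a, BooExpr) { return a.c->foo_and_boo(); }
            friend int operator+(BooExpr a, FooExpr) { return a.c->boo_and_foo(); }
        };

        int main() {
            C c;
            int fused  = c.foo() + c.boo();   // one call: foo_and_boo()
            int single = c.foo();             // implicit conversion: plain foo_impl()
            std::cout << fused << " " << single << "\n";   // prints "3 1"
            return 0;
        }

    The two operator+ overloads are what preserve the non-commutativity: foo()+boo() fuses to foo_and_boo() while boo()+foo() fuses to boo_and_foo().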

    Read the article

  • TSQL Shred XML - Working with namespaces (newbie @ shredding XML)

    - by drachenstern
    Here's a link to my previous question on this same block of code with a working shred example Ok, I'm a C# ASP.NET dev following orders: The orders are to take a given dataset, shred the XML and return columns. I've argued that it's easier to do the shredding on the ASP.NET side where we already have access to things like deserializers, etc, and the entire complex of known types, but no, the boss says "shred it on the server, return a dataset, bind the dataset to the columns of the gridview" so for now, I'm doing what I was told. This is all to head off the folks who will come along and say "bad requirements". Task at hand: Current code that doesn't work: And if we modify the previous post to include namespaces on the XML elements, we lose the functionality that the previous post has... DECLARE @table1 AS TABLE ( ProductID VARCHAR(10) , Name VARCHAR(20) , Color VARCHAR(20) , UserEntered VARCHAR(20) , XmlField XML ) INSERT INTO @table1 SELECT '12345','ball','red','john','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>10</price></size><size xmlns="http://example.com/ns" name="large"><price>20</price></size></sizes>' INSERT INTO @table1 SELECT '12346','ball','blue','adam','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>12</price></size><size xmlns="http://example.com/ns" name="large"><price>25</price></size></sizes>' INSERT INTO @table1 SELECT '12347','ring','red','john','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>5</price></size><size xmlns="http://example.com/ns" name="large"><price>8</price></size></sizes>' INSERT INTO @table1 SELECT '12348','ring','blue','adam','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>8</price></size><size xmlns="http://example.com/ns" name="large"><price>10</price></size></sizes>' INSERT INTO @table1 SELECT '23456','auto','black','ann','<auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">car</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">4</doors><cylinders xmlns="http://example.com/ns">3</cylinders></auto>' INSERT INTO @table1 SELECT '23457','auto','black','ann','<auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">truck</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">2</doors><cylinders xmlns="http://example.com/ns">8</cylinders></auto><auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">car</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">4</doors><cylinders xmlns="http://example.com/ns">6</cylinders></auto>' DECLARE @x XML -- I think I'm supposed to use WITH XMLNAMESPACES(...) 
here but I don't know how SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $vehicle in //auto return <auto type = "{$vehicle/type}" wheels = "{$vehicle/wheels}" doors = "{$vehicle/doors}" cylinders = "{$vehicle/cylinders}" />') FROM @table1 table1 WHERE Name = 'auto' FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , VType = T.Item.value('@type' , 'varchar(10)') , Wheels = T.Item.value('@wheels', 'varchar(2)') , Doors = T.Item.value('@doors', 'varchar(2)') , Cylinders = T.Item.value('@cylinders', 'varchar(2)') FROM @x.nodes('//table1/auto') AS T(Item) If my previous post shows there's a much better way to do this, then I really need to revise this question as well, but on the off chance this coding-style is good, I can probably go ahead with this as-is... Any takers?
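    On the WITH XMLNAMESPACES question, the clause goes immediately before the SELECT that uses it, and the prefix it declares then qualifies the steps inside .query(); a hedged sketch against the tables above (note the clause also causes the namespace declarations to be echoed onto the generated elements, which is harmless here):

        ;WITH XMLNAMESPACES ('http://example.com/ns' AS ns)
        SELECT @x = (
            SELECT ProductID, Name, Color, UserEntered,
                   XmlField.query('
                       for $vehicle in //auto
                       return <auto type      = "{$vehicle/ns:type}"
                                    wheels    = "{$vehicle/ns:wheels}"
                                    doors     = "{$vehicle/ns:doors}"
                                    cylinders = "{$vehicle/ns:cylinders}" />')
            FROM @table1 table1
            WHERE Name = 'auto'
            FOR XML AUTO
        )

    An alternative that keeps everything inside the XQuery itself is a prolog declaration, e.g. XmlField.query('declare namespace ns="http://example.com/ns"; for $vehicle in //auto return ...'). The <auto> elements themselves carry no namespace in the sample data, so //auto and the later @x.nodes('//table1/auto') shred stay exactly as written.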

    Read the article

  • Strategy and AI for the game 'Proximity'

    - by smci
    'Proximity' is a strategy game of territorial domination similar to Othello, Go and Risk. Two players, uses a 10x12 hex grid. Game invented by Brian Cable in 2007. Seems to be a worthy game for discussing a) optimal strategy then b) how to build an AI Strategies are going to be probabilistic or heuristic-based, due to the randomness factor, and the high branching factor (starts out at 120). So it will be kind of hard to compare objectively. A compute time limit of 5s per turn seems reasonable. Game: Flash version here and many copies elsewhere on the web Rules: here Object: to have control of the most armies after all tiles have been placed. Each turn you received a randomly numbered tile (value between 1 and 20 armies) to place on any vacant board space. If this tile is adjacent to any ally tiles, it will strengthen each tile's defenses +1 (up to a max value of 20). If it is adjacent to any enemy tiles, it will take control over them if its number is higher than the number on the enemy tile. Thoughts on strategy: Here are some initial thoughts; setting the computer AI to Expert will probably teach a lot: minimizing your perimeter seems to be a good strategy, to prevent flips and minimize worst-case damage like in Go, leaving holes inside your formation is lethal, only more so with the hex grid because you can lose armies on up to 6 squares in one move low-numbered tiles are a liability, so place them away from your main territory, near the board edges and scattered. You can also use low-numbered tiles to plug holes in your formation, or make small gains along the perimeter which the opponent will not tend to bother attacking. a triangle formation of three pieces is strong since they mutually reinforce, and also reduce the perimeter Each tile can be flipped at most 6 times, i.e. when its neighbor tiles are occupied. Control of a formation can flow back and forth. Sometimes you lose part of a formation and plug any holes to render that part of the board 'dead' and lock in your territory/ prevent further losses. Low-numbered tiles are obvious-but-low-valued liabilities, but high-numbered tiles can be bigger liabilities if they get flipped (which is harder). One lucky play with a 20-army tile can cause a swing of 200 (from +100 to -100 armies). So tile placement will have both offensive and defensive considerations. Comment 1,2,4 seem to resemble a minimax strategy where we minimize the maximum expected possible loss (modified by some probabilistic consideration of the value ß the opponent can get from 1..20 i.e. a structure which can only be flipped by a ß=20 tile is 'nearly impregnable'.) I'm not clear what the implications of comments 3,5,6 are for optimal strategy. Interested in comments from Go, Chess or Othello players. (The sequel ProximityHD for XBox Live, allows 4-player -cooperative or -competitive local multiplayer increases the branching factor since you now have 5 tiles in your hand at any given time, of which you can only play one. Reinforcement of ally tiles is increased to +2 per ally.)

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible or a good idea at all to deploy a Java enterprise web application to a Cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle few hundred users with long but neither CPU nor memory intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud so people are working on enabling such a deployment, on the other hand it doesn't seem to be mature yet and I'm not sure that the cloud is ready for this kind of applications, which differs from the typical cloud-based applications like Twitter. Would you recommend to deploy it to the cloud? What are the pros and cons? The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to a RDBMS and fetches information about Parts from the company's inventory system via a web service. Aside of external users it's used also by few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product and should be reasonably scalable and available though these qualities are only of a medium importance at this stage. I've proposed an architecture consisting of a firewall (FW) and load balancer supporting sticky sessions and https (in the Cloud this would be replaced with EC2's Elastic Load Balancing service and FW on the app. servers, in a physical architecture the load-balancer would be a HW), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't loose his/her long built product) and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance and provide good scalability as long as a single RDBMS can keep with the load, which should be OK for quite a while because most of the operations are done in the memory using a stateful bean and only occasionally stored or retrieved from the DB and the amount of data is low too. A problematic part could be the dependency on the remote inventory system webservice but with good caching of its outputs in the application it should be OK too. Unfortunately I've only vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for few hundred users needs. My rough and mostly unfounded estimate based on actual Amazon offerings is that 1.7GB and a single, 2-core "modern CPU" with speed around 2.5GHz (the High-CPU Medium Instance) should be sufficient for any of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64b, 7.5GB RAM, 2 cores at 1GHz) So my question is whether such a deployment to the cloud is technically and financially feasible or whether dedicated/VPS servers would be a better option and whether there are some real-world experiences with something similar. Thank you very much! 
/Jakub Holy PS: I've found the JBoss EAP in a Cloud case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(

    Read the article

  • Simplest way to do holiday calculation?

    - by brainfrog
    I want to make a little free calendar program to help me and others calculate how much time we have got left in a project. I mean real working time, not just time. Time in a raw form is not saying much. Typically when my boss tells me that I have time until 05-05-2011 it doesn't tell me really how much time I have to do my job. You know...so many things stop me from work: A) beeing at home, not at work (so called "free time" or "spare time"). That is in my case I work exactly 8 hours a day and then the cleaning ladies throw me out of the office with their incredible loud industrial vacuum cleaners every evening (my boss accepts that as an excuse to go home in time, regularly). B) weekends, or more precisely saturdays and sundays C) official holiday rescuing me from having to go to work. what I want to do is make a little utility which tells me how many working hours I really have in a given time period. The first two things A and B are pretty easy to implement. But the last thing C scares my pants off. Holidays. OOOHHH man. You know what that means. Chaos. Pure chaos. The huge question is: HOW TO CALCULATE HOLIDAYS?! Since I want my program to be useful for anyone anywhere in the world, I can't just hardcode all holidays for my little town. So which options do I have? I) I could hand-craft downloadable lists of holidays. Users search them within the application and download them from an webserver. Or I ship all of them in the package. But I would get very, very old if I tried that by myself for every country, state and town. II) I make an initial data sheet with holidays for my town, and don't care about the rest. However, I make that sheet with an how-to public, so that everyone who feels like beeing very nice can provide holiday data for his country / region / whatever. Those are made public on a webserver and everyone can get the data packages he/she needs for the app. III) ? I care a lot about usability. I don't want to make an ugly linux hack style hard to use app that only computer freaks can use. So you need to tell me more about holiday science. I was never really clever at this. I assume every single country in the world has it's own set of holidays. In every country there may be several states. For example the US has some, and Germany has also some states. Holidays vary from state to state. But I know from an good programmer he told me never assume anything. So the questions about holiday science are: Which categories do I need to make holiday-data-packs searchable? A guy from India should find quickly his holiday data pack, and a guy from Sillicon Valley should find his pack as equally fast. It makes most sense to me to filter for COUNTRY STATE WHATEVER. Like a drill-down-search. Did I miss something? What would be the best data format to hold holiday information? A holiday has a start and end date and a name. That should be enough. Would I put all this stuff in thousands of XML files? How would you go about this? Any hint / help is highly welcome! Thanks to everyone!
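    On the data-format part, a small XML pack per country/state keeps the files hand-editable, diff-able and easy for volunteers to contribute; a hedged sketch of one possible shape (element and attribute names are invented for the example and the dates are placeholders, not real holidays):

        <holidaypack country="DE" state="BY" version="2011-1">
          <holiday name="New Year"             start="2011-01-01" end="2011-01-01"/>
          <holiday name="Example school break" start="2011-08-01" end="2011-09-12"/>
        </holidaypack>

    The country and state attributes double as the drill-down search keys, and the version attribute lets the application tell a stale pack from a fresh one.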

    Read the article
