Search Results

Search found 24347 results on 974 pages for 'cross process'.

  • Remote script execution on Windows 2003 server - alternatives to PSEXEC

    - by chickeninabiscuit
    We want to deploy our application to our Test server from our Hudson server. I'd like Hudson to copy the application files and start a script that runs locally on the Test server. We can't use PsExec because of a cross-domain policy. Currently we do this manually, by RDPing to the Test server and checking the code out from Subversion by hand. Are there alternatives to PsExec that can work around the cross-domain policy problem?
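
    If WMI traffic is allowed between the two machines, one commonly cited alternative is to start the remote process through WMI's Win32_Process class. The sketch below is only an illustration - the server name, account, and script path are hypothetical, and the same cross-domain authentication rules still apply, so treat it as a starting point rather than a drop-in answer.

        // Hedged sketch: launch a command on a remote Windows 2003 box via WMI
        // (System.Management). Host, credentials and script path are made up.
        using System;
        using System.Management;

        class RemoteExec
        {
            static void Main()
            {
                var options = new ConnectionOptions
                {
                    Username = @"TESTDOMAIN\deployuser",   // hypothetical account
                    Password = "secret"
                };
                var scope = new ManagementScope(@"\\TESTSERVER\root\cimv2", options);
                scope.Connect();

                using (var processClass = new ManagementClass(
                    scope, new ManagementPath("Win32_Process"), new ObjectGetOptions()))
                {
                    ManagementBaseObject inParams = processClass.GetMethodParameters("Create");
                    inParams["CommandLine"] = @"cmd /c C:\deploy\run-deploy.cmd";   // hypothetical script

                    ManagementBaseObject outParams = processClass.InvokeMethod("Create", inParams, null);
                    Console.WriteLine("Win32_Process.Create returned: " + outParams["ReturnValue"]);
                }
            }
        }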

    Read the article

  • "Test to measure your ability to follow directions and solve complex problems in a neat and orderly manner." [closed]

    - by Matt
    Use the table of symbol substitutions when answering the problem below: Circle = 0 Dot = 1 Line = 2 Triangle = 3 Square = 4 Pentagon = 5 Hexagon = 6 Cross = 7 Heart = 8 Smiley Face = 9 Use the following special rule when answering the problem below: If ever a square is next to a cross during long multiplication, the square shall be treated as a triangle. Problem: Show your work in doing long multiplication of Pentagon Pentagon Nonagon by Line Square Octagon. Show your work using symbols not numbers.

    Read the article

  • GWT layout panels vs. CSS layout

    - by David
    I read an article entitled "Tags First GWT", in which the writer suggests using GWT for event handling and CSS for layout. I just don't know whether the benefit of GWT's cross-browser compatibility goodness outweighs the flexibility offered by pure CSS layout. GWT: GWT 2.0 has some snazzy layout panels, but to get them to resize properly you really need to build the entire panel containment tree from the root panel down. It's an all-or-nothing thing, it seems. CSS: You can use CSS to lay out an application too, and I'm inclined to do just that, if only to justify my purchase of several books touting the 'semantic markup' gospel. The downside might be cross-browser incompatibilities, the prevalence of which I have yet to determine. Which way to go? What is your opinion? Are cross-browser problems bad enough, and prevalent enough, to warrant ditching my CSS books and building with GWT layout panels?

    Read the article

  • Where to place ClientAccessPolicy.xml for Local WCF Service?

    - by cam
    I'm trying to create a basic WCF Service and Silverlight client. I've followed this tutorial: http://channel9.msdn.com/shows/Endpoint/Endpoint-Screencasts-Creating-Your-First-WCF-Client/ Since Silverlight 4 was incompatible with the WSHttpBinding, I changed it to BasicHttpBinding. Unfortunately I keep getting this error now: "An error occurred while trying to make a request to URI '**'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent." I placed clientaccesspolicy.xml in the root directory of the WCF project (which is in the same solution as the Silverlight client). This did not solve the problem. What do I need to do?

    Read the article

  • Any hosted versions of jQuery that have the 'Access-Control-Allow-Origin: *' header set?

    - by Greg Bray
    I have been working with jQuery recently and ran into a problem where I couldn't include it in a userscript because XMLHttpRequest is subject to the same-origin policy. After further testing I found that most browsers also support the Cross-Origin Resource Sharing access control defined by the W3C as a workaround for issues with the same-origin policy. I tested this by hosting the jQuery script on a local web server that included the Access-Control-Allow-Origin: * HTTP header, which allowed the script to be downloaded using XMLHttpRequest so that it could be included in my userscript. I'd like to use a hosted version of jQuery when releasing the script, but so far, testing with tools like http://www.seoconsultants.com/tools/headers, I have not found any sites that allow cross-origin access to the jQuery script. Here is the list I have tested so far: http://www.asp.net/ajaxlibrary/CDN.ashx http://code.google.com/apis/ajaxlibs/documentation/index.html#jquery http://docs.jquery.com/Downloading_jQuery#CDN_Hosted_jQuery Are there any other hosted versions of jQuery that do allow cross-origin access?

    Read the article

  • Using IDataErrorInfo and setting Validation.HasError style

    - by Gaurav
    In WPF, using IDataErrorInfo and a Style, I want to create a form that shows the end user three different statuses while validating data. To make the scenario clearer: 1) I have a textbox, and next to it an icon that tells the end user what kind of input the textbox expects - the initial status, with an information icon. 2) As soon as the user enters data it is validated, and the form decides whether it is valid or not - most of the time it will show a cross (X) icon indicating invalid data. 3) Because it validates with UpdateSourceTrigger="PropertyChanged", the cross icon turns into a green check mark as soon as the value becomes valid, i.e. [ ] i (tooltip - Any valid user name) [Ga ] X (tooltip - Invalid user name. Must be 5 char long) [Gaurav ] * (it will show only the correct icon, meaning a valid value). How can I achieve this using IDataErrorInfo and a Style? I tried, but as soon as my form loads it invalidates all the data and shows the cross icon straight away. I want to show a different tooltip and a different icon for each of the three states (initial info, invalid data, valid data).
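
    One way to keep the cross icon from appearing the moment the form loads is to have the view model report no error until the property has actually been edited. The sketch below is only an illustration of that idea - the class and property names are hypothetical, and the info/valid icons would still be driven from the Style (for example by also binding to the IsTouched flag together with Validation.HasError).

        // Hedged sketch (hypothetical names): IDataErrorInfo that stays silent until the
        // user has edited the field, so the initial state can show the info icon instead
        // of the cross. A Style/trigger can combine Validation.HasError with IsTouched
        // to pick between the info, cross and check-mark icons.
        using System.ComponentModel;

        public class UserViewModel : IDataErrorInfo, INotifyPropertyChanged
        {
            private string _userName;

            public bool IsTouched { get; private set; }

            public string UserName
            {
                get { return _userName; }
                set
                {
                    _userName = value;
                    IsTouched = true;                       // leave the "initial info" state
                    OnPropertyChanged("UserName");
                    OnPropertyChanged("IsTouched");
                }
            }

            public string Error { get { return null; } }

            public string this[string columnName]
            {
                get
                {
                    if (columnName == "UserName" && IsTouched)
                    {
                        if (string.IsNullOrEmpty(_userName) || _userName.Length < 5)
                            return "Invalid user name. Must be 5 char long";   // cross icon + tooltip
                    }
                    return null;                            // initial or valid: no validation error
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }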

    Read the article

  • Recording/Reading C doubles in the IEEE 754 interchange format

    - by rampion
    So I'm serializing a C data structure for cross-platform use, and I want to make sure I'm recording my floating point numbers in a cross-platform manner. I had been planning on just doing char * pos; /*...*/ *((double*) pos) = dataStructureInstance->fieldWithOfTypeDouble; pos += sizeof(double); But I wasn't sure that the bytes would be recorded in the char * array in the IEEE 754 interchange format. I've been bitten by cross-platform issues before (endian-ness and whatnot). Is there anything I need to do to a double to get the bytes in interchange format?

    Read the article

  • YesNo MessageBox not closing when x-button clicked

    - by Simpzon
    When I open a MessageBox with the YesNo options, the (usually) cancelling cross in the upper right is shown but has no effect. System.Windows.MessageBox.Show("Really, really?", "Are you sure?", MessageBoxButton.YesNo); If I offer YesNoCancel as options, clicking the cross closes the dialog with MessageBoxResult.Cancel. System.Windows.MessageBox.Show("Really, really?", "Are you sure?", MessageBoxButton.YesNoCancel); I would have expected the cross to look disabled, if not be hidden entirely, given that clicking it has no effect. I am probably not the first one to observe this. What is your favorite way to hide/disable this button, or to work around the issue? Note: I would prefer a solution that does not use System.Windows.Forms, since I am dealing with WPF projects and would like to avoid any interop if possible.
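
    Purely as an illustration (it does not hide the button): if showing a Cancel button is acceptable, one interop-free workaround is to ask for YesNoCancel so the cross actually closes the box, and then treat Cancel the same as No. A hedged sketch:

        // Hedged sketch: avoid the dead close button by using YesNoCancel and
        // mapping Cancel (returned when the cross is clicked) onto "No".
        using System.Windows;

        public static class ConfirmDialog
        {
            public static bool AskYesNo(string message, string caption)
            {
                MessageBoxResult result = MessageBox.Show(
                    message, caption, MessageBoxButton.YesNoCancel, MessageBoxImage.Question);

                return result == MessageBoxResult.Yes;   // Cancel and No both count as "no"
            }
        }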

    Read the article

  • [java] Efficiency of while(true) ServerSocket Listen

    - by Submerged
    I am wondering if a typical while(true) ServerSocket listen loop takes an entire core to wait for and accept a client connection (even when implementing Runnable and using Thread.start()). I am implementing a type of distributed computing cluster, and each computer needs every core it has for computation. A master node needs to communicate with these computers (invoking static methods that modify the algorithm's functioning). The reason I need to use sockets is the cross-platform / cross-language capabilities. In some cases, PHP will be invoking these Java static methods. I used a Java profiler (YourKit) and I can see my running ServerSocket listen thread; it never sleeps and is always running. Is there a better approach to do what I want? Or will the performance hit be negligible? Please feel free to offer any suggestion if you can think of a better way (I've tried RMI, but it isn't supported cross-language). Thanks everyone

    Read the article

  • Adobe Socket Policy File Server Problems

    - by Matt
    Has anyone been able to successfully implement a service to serve the required socket policy file to FlashPlayer? I am running the Python implementation of the service provided by Adobe at http://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html and using the following policy file: <?xml version="1.0" encoding="UTF-8"?> <cross-domain-policy> <site-control permitted-cross-domain-policies="master-only"/> <allow-access-from domain="*" to-ports="*" secure="false"/> </cross-domain-policy> and receiving this message from Flash: [SecurityErrorEvent type="securityError" bubbles=false cancelable=false eventPhase=2 text="Error #2048: Security sandbox violation: http://www.mapopolis.com/family/Tree.swf cannot load data from www.mapopolis.com:1900."] Thanks.

    Read the article

  • add new column in birt report based on generated grand total column in crosstab

    - by Sanga
    Hi there, thanks for reading my question. Please, I need your help here. I am trying to add a column that is based on a cross-tab's grand total. The grand total column was added by clicking on the totals option for the column. Now I want to use this total for another calculation in my cross-tab, but it's difficult to add a column after the grand total column. Secondly, it's difficult to get a reference to the results in the grand total column. What do you advise? Thanks, Ime

    Read the article

  • WPF MessageBox close without any action

    - by developer
    Hi, I have a confirmation message box for the user in one of my apps. Below is the code for that: MessageBoxResult res = System.Windows.MessageBox.Show("Could not find the folder, so the D: Drive will be opened instead."); if (res == MessageBoxResult.OK) { MessageBox.Show("OK"); } else { MessageBox.Show("Do Nothing"); } Now, when the user clicks on the OK button, I want certain code to execute, but when they click on the red cross at the upper right corner, I just want the message box to close without doing anything. In my case I get 'OK' displayed even when I click on the red cross icon at the upper right corner. Is there a way I can have 'Do Nothing' displayed when I click on the cross? I do not want to add any more buttons.
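
    For what it's worth, the reason both paths report 'OK' is that a MessageBox showing only the default OK button also returns OK when it is closed via the cross. A hedged illustration follows - note it does add a Cancel button, which the poster wants to avoid, so treat it only as the simplest workaround: asking for OKCancel makes the cross report Cancel, so the two cases can be told apart.

        // Hedged sketch: with OKCancel, clicking the red cross returns Cancel
        // instead of OK, so only an explicit OK click runs the follow-up code.
        using System.Windows;

        public static class FolderPrompt
        {
            public static void Show()
            {
                MessageBoxResult res = MessageBox.Show(
                    "Could not find the folder, so the D: Drive will be opened instead.",
                    "Open folder",                   // hypothetical caption
                    MessageBoxButton.OKCancel);

                if (res == MessageBoxResult.OK)
                {
                    MessageBox.Show("OK");           // user explicitly confirmed
                }
                else
                {
                    MessageBox.Show("Do Nothing");   // Cancel button or the red cross
                }
            }
        }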

    Read the article

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us). I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.) On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously). And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic: Smart Gets things done (thanks for these two Joel) Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one) *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again. We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is: Excellent coder For test engineers, the non-exhaustive meaning of "gets things done" is: Good at finding problems in software Competent coder Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with. My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom. 
Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now: (LOTS of) People apply for jobs. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer). We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team. Anyone who's made it this far will receive an offer from us. *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles. **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between. ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code they're not going to work out. Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well. 
This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process, which looked like this: The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week, whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our currently employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.) No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN! OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see: How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.) How many of them submit a good assessment? (More/less than at present?) How much overhead is there for us in dealing with these assessments compared to now? What are the success and failure rates at each interview stage compared to now? How many people are we hiring at the end of it compared to now? 
I think it'll work because I hypothesize that, amongst other things: It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment? Candidates who would submit a shoddy application probably won't feel motivated to do the assessment. Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment. In general, only the better candidates will complete and submit the assessment. Marking assessments is much less work so we'll be able to deal with any increase that we see (hopefully we will see). There are obviously other questions as well: Is plagiarism going to be a problem? Is there any way we can detect/discourage potential plagiarism? How do we assess candidates' education and experience? What about their ability to communicate in writing? Do we still want them to submit a CV afterwards if they pass assessment? Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment? How does this affect our relationship with recruitment agencies we might use to hire for these roles? So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes! *I'm going to play dirty by offering them beer and chocolate during meetings. Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvas for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site: I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?" Technorati Tags: recruitment,hr,developers,testers,red gate,cv,resume,cover letter,assessment,sea change

    Read the article

  • Unicenter Software Delivery 4 not able to connect to MS SQL 2000 Database after W2003 SP2 upgrade

    - by grub
    Hello Everyone Yesterday I installed the Windows Server 2003 Service Pack 2 on a Windows Server 2003 which has Unicenter Software Delivery 4 installed. Prior to the installation I disabled every CA service on the server (Brightstor, SDO , RCO, TNG) and the MS SQL 2000 service. After the installation of the SP2 I enabled the services again but the Unicenter Service is not able to connect to the MS SQL 2000 Database anymore. The database itself is up and running and I can connect to it with the Enterprise Manager. A dbcc checkdb doesnt return any errors on the Unicenter database. The Unicenter service throws the following error messages during startup: IM[1] 27/05 10:38:31,272 Installation Manager in init phase IM[1] 27/05 10:38:31,694 Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:31,694 sqls error details: IM[1] 27/05 10:38:31,694 (null) IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. IM[1] 27/05 10:38:32,069 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:32,069 sqls error details: IM[1] 27/05 10:38:32,069 (null) IM[1] 27/05 10:38:32,069 returned 0. IM[1] 27/05 10:38:32,084 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. IM[1] 27/05 10:38:32,084 Failed to open database. IM[1] 27/05 10:38:32,084 Installation Manager ends> If I check the Unicenter configutation with *chkmib_l* the tool throws an exception and creates a small dump file. An Exception Occurred: Time: 27/05 09:49:38,928 Reason: ChkMIB_l.exe caused an UNKNOWN_EXCEPTION in module kernel32.dll at 7C82001B:77E4BEE7 Registers: EAX=0012F908 EBX=00000000 ECX=00000000 EDX=02410004 ESI=0012F998 EDI=0012F998 EBP=0012F958 ESP=0012F904 EIP=77E4BEE7 FLG=00000206 CS =7C82001B DS =B90023 SS =120023 ES =120023 FS =7C82003B GS =3F0000 Call Stack: 7C82001B:77E4BEE7 (0xE06D7363 0x00000001 0x00000003 0x0012F98C) kernel32.dll 7C82001B:77BB3259 (0x0012F9B8 0x2B017C50 0x2B024404 0x00B68C98) MSVCRT.dll 7C82001B:2B010C42 (0x00020003 0x010C00FE 0x003F0190 0x00B69050) PS.dll << SOFTWARE DELIVERY INSTANCE INFO >> TRIGGER 0(1) instances: JCE 0(1) instances: TM 0(1) instances: IM 0(1) instances: DM 0(1) instances: DPU 0(71) instances: NATF 0(1) instances: MIBCONV 0(0) instances: API 0(4) instances: DTSFT 0(0) instances: TNGPOP 0(0) instances: DGATE 0(0) instances: << FLUSHING MEMORY TRACES >> << STOP FLUSHING MEMORY TRACES >> I compared the configuration of the SDO service and the system configuration with another server on which the Windows Server 2003 SP2 is installed and SDO is working. The configuration is the same and the same driver and software versions are used. Do you have any idea what causes the connection issue? Should I deinstall the unicenter service and make a fresh installation on the server or should I remove the Windows Server 2003 SP2? I don't want to remove the SP2 because it's a requirement for WSUS3 SP2 and I really don't want to know how many possible exploits are possible in such an old system ;-) Thank you very much and have a nice day. Below you can find more detailed information about the system and the SDO service. 
psinfo output (system information) System information for \\CZZAAS1003: Uptime: 0 days 14 hours 38 minutes 50 seconds Kernel version: Microsoft Windows Server 2003, Multiprocessor Free Product type: Standard Edition Product version: 5.2 Service pack: 2 Kernel build number: 3790 Install date: 23.9.2004, 11:16:11s IE version: 6.0000 System root: C:\WINDOWS Processors: 2 Processor speed: 2.3 GHz Processor type: Intel(R) Xeon(TM) CPU Physical memory: 1024 MB Video driver: RAGE XL PCI Family (Microsoft Corporation) sdver output (Unicenter Software delivery version) Unicenter Software Delivery 4.0 SP1 I2 ENU [2901] Copyright 2004 Computer Associates International, Incorporated ms sql 2000 version and odbc driver version MS SQL 2000 Server Standard Edition Product Version: 8.00.760 (SP3) ODBC Driver: SQL Server - Version 2000.86.3959.00 complete Unicenter Software delivery service log file TRIGGER[1] 27/05 10:38:28,366 SD Trigger Agent has started NATF[1] 27/05 10:38:28,928 Initiation phase finished IM[1] 27/05 10:38:31,272 Installation Manager in init phase IM[1] 27/05 10:38:31,694 Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:31,694 sqls error details: IM[1] 27/05 10:38:31,694 (null) IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. IM[1] 27/05 10:38:32,069 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:32,069 sqls error details: IM[1] 27/05 10:38:32,069 (null) IM[1] 27/05 10:38:32,069 returned 0. IM[1] 27/05 10:38:32,084 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. IM[1] 27/05 10:38:32,084 Failed to open database. IM[1] 27/05 10:38:32,084 Installation Manager ends TM[1] 27/05 10:38:32,116 Task Manager in init phase TM[1] 27/05 10:38:32,334 Process TM(L) - [006132] failed to open database SDDATA. dbopen() call failed. TM[1] 27/05 10:38:32,334 sqls error details: TM[1] 27/05 10:38:32,334 (null) TM[1] 27/05 10:38:32,381 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. TM[1] 27/05 10:38:32,381 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. TM[1] 27/05 10:38:32,381 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process TM(L) - [006132] failed to open database SDDATA. dbopen() call failed. TM[1] 27/05 10:38:32,381 sqls error details: TM[1] 27/05 10:38:32,381 (null) TM[1] 27/05 10:38:32,381 returned 0. TM[1] 27/05 10:38:32,381 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. TM[1] 27/05 10:38:32,381 Failed to open database. TM[1] 27/05 10:38:32,381 Task Manager ends DM[1] 27/05 10:38:33,272 Dialogue Manager is now active API[1] 27/05 10:38:34,397 API Server Process in init phase API[1] 27/05 10:38:34,397 API - SDNLS_Init API[1] 27/05 10:38:34,397 API - connectEM API[1] 27/05 10:38:34,412 API - apiServ.init DM[1] 27/05 10:38:34,678 **AND** 1 Agents triggered API[1] 27/05 10:38:34,709 Process API(L) - [005680] failed to open database SDDATA. dbopen() call failed. API[1] 27/05 10:38:34,709 sqls error details: API[1] 27/05 10:38:34,709 (null) API[1] 27/05 10:38:34,756 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. API[1] 27/05 10:38:34,756 ##EXCEPTION## TableError C@:MainAPIL\APISERVL.CXX:246. API[1] 27/05 10:38:34,756 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. 
Process API(L) - [005680] failed to open database SDDATA. dbopen() call failed. API[1] 27/05 10:38:34,756 sqls error details: API[1] 27/05 10:38:34,756 (null) API[1] 27/05 10:38:34,756 returned 0. API[1] 27/05 10:38:34,756 Open of the database failed. API[1] 27/05 10:38:34,756 API - apiServ.init complete API[1] 27/05 10:38:34,756 API - start_APIServer DM[1] 27/05 10:38:34,803 CZZAAR1037 DPU[1:CZZAAR1037] 27/05 10:38:35,772 DPU in init phase DPU[1:CZZAAR1037] 27/05 10:38:36,100 >> GetManagerData DPU[1:CZZAAR1037] 27/05 10:38:36,287 >> SetCompInfo DPU[1:CZZAAR1037] 27/05 10:38:36,334 >> GetContainerList DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,381 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,522 >> SetCompAttr DPU[1:CZZAAR1037] 27/05 10:38:36,569 >> SetDetected DPU[1:CZZAAR1037] 27/05 10:38:36,584 disconnect DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,584 DPU ends DM[1] 27/05 10:38:38,006 **AND** 0 Agents triggered JCE[1] 27/05 10:38:38,053 JCE starts DM[1] 27/05 10:38:38,287 CZZAAS1003 DPU[2:CZZAAS1003] 
27/05 10:38:38,412 DPU in init phase DPU[2:CZZAAS1003] 27/05 10:38:38,647 >> GetManagerData DPU[2:CZZAAS1003] 27/05 10:38:38,756 >> SetCompInfo DPU[2:CZZAAS1003] 27/05 10:38:38,787 >> GetContainerList DM[1] 27/05 10:38:38,850 **AND** 1 Agents triggered DM[1] 27/05 10:38:38,928 CZZAAR1124 DPU[3:CZZAAR1124] 27/05 10:38:39,053 DPU in init phase DPU[3:CZZAAR1124] 27/05 10:38:39,272 >> GetManagerData DM[1] 27/05 10:38:39,334 **AND** 1 Agents triggered DPU[3:CZZAAR1124] 27/05 10:38:39,381 >> SetCompInfo DPU[3:CZZAAR1124] 27/05 10:38:39,412 >> GetContainerList DM[1] 27/05 10:38:39,412 CZZAAR1125 DPU[3:CZZAAR1124] 27/05 10:38:39,428 getJobState 3 from 5b88e DPU[3:CZZAAR1124] 27/05 10:38:39,428 getJobState 3 from 5b88e DPU[2:CZZAAS1003] 27/05 10:38:39,491 >> SetCompAttr DPU[3:CZZAAR1124] 27/05 10:38:39,522 >> SetCompAttr DPU[4:CZZAAR1125] 27/05 10:38:39,522 DPU in init phase DPU[3:CZZAAR1124] 27/05 10:38:39,584 >> SetDetected DPU[2:CZZAAS1003] 27/05 10:38:39,584 >> SetDetected DPU[3:CZZAAR1124] 27/05 10:38:39,584 disconnect DPU[3:CZZAAR1124] 27/05 10:38:39,600 getJobState 3 from 5b88e DPU[3:CZZAAR1124] 27/05 10:38:39,600 DPU ends DPU[2:CZZAAS1003] 27/05 10:38:39,631 disconnect DPU[2:CZZAAS1003] 27/05 10:38:39,631 DPU ends DPU[4:CZZAAR1125] 27/05 10:38:39,756 >> GetManagerData DPU[4:CZZAAR1125] 27/05 10:38:39,850 >> SetCompInfo DPU[4:CZZAAR1125] 27/05 10:38:39,881 >> GetContainerList DPU[4:CZZAAR1125] 27/05 10:38:39,897 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:39,897 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:39,991 >> SetCompAttr DPU[4:CZZAAR1125] 27/05 10:38:40,100 >> SetDetected DPU[4:CZZAAR1125] 27/05 10:38:40,116 disconnect DPU[4:CZZAAR1125] 27/05 10:38:40,116 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:40,116 DPU ends DM[1] 27/05 10:38:40,741 **AND** 0 Agents triggered JCE[1] 27/05 10:38:42,756 JCE ends DM[1] 27/05 10:38:47,475 **AND** 0 Agents triggered DM[1] 27/05 10:38:54,241 **AND** 0 Agents triggered

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In the series the following parts have been published: Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application. Developers are “spoilt” people who expect an easy debugging experience for every technique they work with, so they also expect one when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself. Remote debugging prerequisites: The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share. Debugging symbols prerequisites: To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with Full Debug Info. Steps: In my setup I have a development machine and a build server. To set up the remote debugging, I performed the following steps. Locate on your development machine the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger. Create a share for the Remote Debugger folder, and make sure that the share (and the folder) has the correct permissions so the user on the build server has access to it. On the build server, go to the shared “Remote Debugger” folder and start msvsmon.exe, which is located in the folder that matches the platform of the build server; this will open a WinForms application. Go back to your development machine and open the BuildProcess solution. Start the Attach to Process command (Ctrl+Alt+P) and type the name of the build server in the Qualifier field. In my case the user account that started msvsmon is a different user than the one on my development machine; in that case you have to type the qualifier in the format shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter>. Since the build service is running with other credentials, check the option “Show processes from all users”. Now the Attach to Process dialog shows the TFSBuildServiceHost process. Set the breakpoint in the activity you want to debug and kick off a build. Be aware that when you attach to the TFSBuildServiceHost you debug every single build that is run by this Windows service, so make sure you don't debug the build server that is in production! You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Creating an HttpHandler to handle request of your own extension

    - by Jalpesh P. Vadgama
    I have already posted about HTTP handlers in detail some time ago here. Now let's create an HTTP handler which will handle my custom extension. For that we need to create an HTTP handler class which implements IHttpHandler. As we are implementing IHttpHandler, we need to implement one method called ProcessRequest and one property called IsReusable. The ProcessRequest function will handle all the requests for my custom extension. So here is the code for my HTTP handler class. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; namespace Experiement { public class MyExtensionHandler:IHttpHandler { public MyExtensionHandler() { //Implement initialization here } bool IHttpHandler.IsReusable { get { return true; } } void IHttpHandler.ProcessRequest(HttpContext context) { string excuttablepath = context.Request.AppRelativeCurrentExecutionFilePath; if (excuttablepath.Contains("HelloWorld.dotnetjalps")) { Page page = new HelloWorld(); page.AppRelativeVirtualPath = context.Request.AppRelativeCurrentExecutionFilePath; page.ProcessRequest(context); } } } } Here in the above code you can see that in the ProcessRequest function I get the current executable path and then process that page. Now let's create a page with the extension .dotnetjalps and then process it with the HTTP handler created above. It will create a page like the following. Now let's write something in the Page_Load event, like the following. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace Experiement { public partial class HelloWorld : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World"); } } } Now we have to tell our web server that we want to process requests for this .dotnetjalps extension through our custom HTTP handler; for that we need to add a tag to the httpHandlers section of web.config, like the following. <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <httpHandlers> <add verb="*" path="*.dotnetjalps" type="Experiement.MyExtensionHandler,Experiement"/> </httpHandlers> </system.web> </configuration> That's it. Now run the page in a browser and it will execute like the following. Isn't it cool? Stay tuned for more. Happy programming. Technorati Tags: HttpHandler,ASP.NET,Extension

    Read the article

  • What’s the Difference Between Succession Management and Talent Reviews?

    - by HCM-Oracle
    By Marcie Van Houten Is there a difference or are they pieces of one holistic strategic talent process? And can you have one without the other?  First, let me give a quick definition of each.  Succession planning (or management) is about creating succession slates or talent pools in support of a critical job or position or sets thereof. And then using those plans to help mitigate risk and plan talent needs for the organization.  Talent reviews (known by other names often) are sets of meetings where managers and executives come together to review, discuss and often heatedly debate the merits and potential of their employees, and then place and sometimes calibrate that talent on a performance to potential matrix.  These are some of the most strategic conversations happening in conference rooms across the globe. I speak with a lot of organizations about their practices in this area and the answers to these questions are as varied and nuanced as there are organizations thinking about them.  Some are passionate about their talent review processes and have a very evolved and thoughtful approach.  They really know their people, where their talent is, and the opportunities they plan to offer them.  And to them that is their succession process.  They may never create a slate of named candidates for a job or assign employees to formal talent pools.   On the flip side there are other organizations that create slates and slates and often multiple talent pools to support their strategic positions.  Through these, they are able to mitigate the risk associated with having a key player leave their organization.  And for them, that is their succession process.  Some will start from the lower levels of their organization and roll up their succession plans, while other organizations only cover their top 200 executives and key positions with plans.  And then there are organizations that leverage some of all of these.  Ultimately, the goals are to increase employee engagement, reduce talent-related risk, ensure the right talent is aligned to the strategic initiatives and to drive business value.  The approaches are as unique as the organizations they represent and the business opportunities they are looking to seize upon.   And that's ok.  It's great in fact. Because one thing that is common is the recognition that the need to know your people and align your top talent to the future needs of the organization is mission critical. Sure, there are a set of commonly recognized best practices and guiding principles for all of this.  There is no one right or perfect answer.  And that is what makes this all so much darn fun.  With Talent Review and Succession Management from Oracle HCM Cloud, we’ve blended the ability to support your strategic talent review conversations with both succession plans and talent pools allowing for one very seamless and interactive process. So whether you create a lot of succession plans, only focus on talent pools, have a robust talent review process, or all of the above, Oracle has you covered. I’m looking forward to spending time with our customers at the upcoming OHUG Global Conference 2014 happening June 9-13 in Las Vegas.  It’s an opportunity for me to talk to customers about their business and how they are doing strategic talent processes like talent reviews and succession.  I hope to see you there. Marcie Van Houten brings over 20 years of management consulting, information systems and human capital management experience to her role as director of product strategy at Oracle. 
Ms. Van Houten has spent the past several years at Oracle working closely with customers to help drive the direction of the company's talent and succession management applications. Additionally, she spent nine years at PeopleSoft as Director of Information Systems leading human capital management implementation projects. Marcie Van Houten lives in Walnut Creek, California, and holds a MBA from Southern Methodist University in Dallas, Texas.  You can follow her on Twitter: @MarcieVH

    Read the article

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010, and many online resources discussing the upgrade process are a "Bing/Google Search" away. As I happen to be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more info), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual Studio 2010 to demonstrate how simple the upgrade process is. In the next few lines, I will briefly touch on upgrading to MVC2 for Visual Studio 2008, then discuss, in more detail, the upgrade process using Visual Studio 2010 while highlighting the advantage of its multi-targeting support. Using Visual Studio 2008 SP1: For upgrading to MVC2 using VS2008 SP1, a Microsoft white paper [1] presents two approaches: 1) using a provided automated upgrade tool, or 2) manually upgrading the project. I personally prefer using the automated tool, although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one takes to upgrade. Using Visual Studio 2010: Life is much easier for developers who have already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard, as shown in figures 1, 2, 3 and 4. As we proceed with the upgrade process, the wizard requests confirmation on whether we choose to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (Figure 5). VS2010 does a good job with multi-targeting: we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (multi-targeting enables us to target versions as early as .Net 2.0 in VS2010). Figure 1 - Open Solution File Using VS2010 Figure 2 - VS2010 Conversion Wizard Figure 3 - Ready To Convert To VS2010 Confirmation Screen Figure 4 - VS2010 Solution Conversion Progress Figure 5 - Confirm Target Framework Upgrade In an attempt to make my demonstration realistic, I opted to keep the project targeted to the .Net 3.5 Framework. After the successful completion of the conversion process, a quick sanity check revealed that the NerdDinner project is still targeted to the .Net 3.5 framework, as shown in Figure 6. Inspecting the Web.Config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (Figure 7), and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time invested in modifying existing code. Figure 6 - Confirm Target Framework Remained .Net 3.5 Figure 7 - Confirm MVC DLL Version Has Been Upgraded In conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development in versions as early as .Net 2.0. References: 1.
"Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2" http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter. Solution Summary: Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 employees. The customer had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition to send the invoices through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payable processing, increased productivity by 80%, and reduced invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average. Company Overview: Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest. Business Challenges: TXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were to: increase the efficiency of core business processes, such as invoice processing, to support the organization’s desire to maintain its role as the Southwest’s leader in delivering high-quality, low-cost products to the construction industry; meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance; and streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities. Solution Deployed: TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.
Business Results Significantly lowered resource needs for payable processing through centralization and improved efficiency  Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts Tracked to increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals “Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%—a very important efficiency gain.” Terry Marshall, Vice President of Information Services, Texas Industries, Inc. Additional Information TXI Customer Snapshot Oracle WebCenter Content Oracle WebCenter Capture Oracle WebCenter Forms Recognition

    Read the article

  • What's New in Oracle's EPM System?

    - by jmorourke
    Oracle’s EPM System R11.1.2.2  is now generally available to customers and partners on the download center.  Although the release number doesn’t sound significant, this is a major release of Oracle’s Hyperion EPM Suite with new modules as well as significant enhancements across the suite.  This release was announced back on April 4th as part of Oracle’s Business Analytics Strategy launch, so analytics is a key aspect of the release.  But the three biggest pieces of news in this release are Oracle Hyperion Planning support for the Exalytics In-Memory Machine, the new Project Financial Planning Application and the new Account Reconciliations Manager module. The Oracle Exalytics In-Memory Machine was announced back in October 2011, at Oracle OpenWorld.  It’s the latest installment from Oracle in a line of engineered systems that combine Oracle Sun hardware, with Oracle database and application technologies – in solutions that are designed to provide high scalability and performance for specific tasks.  Exalytics is the first engineered system specifically designed for high performance analytics.  Running in-memory versions of Oracle Essbase, as well as the Oracle TimesTen database and Oracle BI tools, Exalytics provides speed of thought response times for complex analytic processes with advanced visualizations.  Early adopter customers have achieved 5X to 100X faster interactivity and 6X to 10X faster planning cycles.  Hyperion Planning running with Oracle Exalytics will support enterprise-wide planning, budgeting and forecasting with more detailed data, with hundreds to thousands of users across an organization getting speed of thought performance. The new Hyperion Project Financial Planning application delivered with EPM 11.1.2.2 is also great news for Oracle customers.  This application follows on the heels of other special-purpose planning applications that Oracle has delivered for Workforce and Capital Asset planning.  It allows Project Managers to identify project-related expenses and revenues, plan and propose new projects, and track results over time. Finance Managers can evaluate and compare different projects, manage the funding process, monitor and report the actual financial results and impacts of projects and project portfolios. This new application is applicable to capital projects, contract projects and indirect projects like IT and HR projects across all industries.  This application is a great complement to existing Project Management applications, and helps bridge the gap between these applications, and the financial planning and budgeting process. Account reconciliations has to be one of the biggest bottlenecks and risks in the financial close and reporting process, and many organizations rely on spreadsheets and manual processes to perform this critical process.  To help address this problem, Oracle developed an Account Reconciliation Manager module that is being delivered as part of Oracle Hyperion Financial Close Management.   This module helps automate and streamline account reconciliations and eliminates the chances for errors, omissions and fraud.  But unlike standalone account reconciliation packages, it’s integrated with the rest of the Oracle Hyperion Financial Close suite, and can integrate balances from any source system.  
This can help alleviate a major bottleneck in the financial close process, increase accuracy, and reduce risk, and it can complement existing investments in Hyperion Financial Management as well as Oracle and non-Oracle transaction processing systems. Other enhancements in this release include an enhanced Web 2.0 interface for Hyperion Planning and Hyperion Financial Management (HFM), configurable dimensionality in HFM, a new Predictive Planning feature in Hyperion Planning, a new Detailed Profitability feature in Hyperion Profitability and Cost Management, a new Smart View interface for Hyperion Strategic Finance, and integration of the Hyperion applications with JD Edwards Financials.

For more information about Oracle EPM System R11.1.2.2, check out the links below:
Press Release: http://www.oracle.com/us/corporate/press/1575775
Product Information on O.com: http://www.oracle.com/us/solutions/business-analytics/overview/index.html
Product Information on OTN: http://www.oracle.com/technetwork/middleware/epm/downloads/index.html
Webcast Replay: http://www.oracle.com/us/go/index.html?Src=7317510&Act=65&pcode=WWMK11054701MPP046

Please contact me if you have any questions or need additional information – [email protected]

    Read the article

  • Copy New Files Only in .NET

    - by psheriff
Recently I had a client that needed to copy files from one folder to another. However, there was a process running that would dump new files into the original folder every minute or so. So, we needed to be able to copy over all the files one time, then go back a little later and grab just the new files. After looking into the System.IO namespace, I found that none of the classes in there met my needs exactly. Of course I could build it out of the various File and Directory classes, but then I remembered back to my old DOS days (yes, I am that old!). The XCopy command in DOS (or the command prompt for you pure Windows people) is very powerful. One of the options you can pass to this command is to grab only newer files when copying from one folder to another. So instead of writing a ton of code I decided to simply call the XCopy command using the Process class in .NET.

The command I needed to run at the command prompt looked like this:

XCopy C:\Original\*.* D:\Backup\*.* /q /d /y

What this command does is copy all files from the Original folder on the C drive to the Backup folder on the D drive. The /q option says to do it quietly, without repeating all the file names as it copies them. The /d option says to get any newer files it finds in the Original folder that are not in the Backup folder, or any files that have a newer date/time stamp. The /y option will automatically overwrite any existing files without prompting the user to press the "Y" key to overwrite the file.

To translate this into code that we can call from our .NET programs, you can write the CopyFiles method presented below.

C#

using System.Diagnostics;

public void CopyFiles(string source, string destination)
{
  // Build the xcopy arguments: copy everything, quietly, newer files only,
  // and overwrite without prompting.
  ProcessStartInfo si = new ProcessStartInfo();
  string args = @"{0}\*.* {1}\*.* /q /d /y";

  args = string.Format(args, source, destination);

  si.FileName = "xcopy";
  si.Arguments = args;
  Process.Start(si);
}

VB.NET

Imports System.Diagnostics

Public Sub CopyFiles(source As String, destination As String)
  ' Build the xcopy arguments: copy everything, quietly, newer files only,
  ' and overwrite without prompting.
  Dim si As New ProcessStartInfo()
  Dim args As String = "{0}\*.* {1}\*.* /q /d /y"

  args = String.Format(args, source, destination)

  si.FileName = "xcopy"
  si.Arguments = args
  Process.Start(si)
End Sub

The CopyFiles method first creates a ProcessStartInfo object. This object is where you fill in the name of the command you wish to run and the arguments you wish to pass to the command. I created a string with the arguments, then filled in the source and destination folders using the string.Format() method. Finally, you call the Start method of the Process class, passing in the ProcessStartInfo object. That's all there is to calling any command in the operating system. Very simple, and much less code than it would have taken had I coded it using the various File and Directory classes.

Good Luck with your Coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
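One thing to note about the CopyFiles method above: Process.Start returns as soon as XCopy has been launched, so the caller never learns whether the copy actually succeeded. If you need that feedback, the variation below is a minimal sketch, not part of the original article (the FileCopier class and CopyNewFiles method names are just illustrative), that waits for XCopy to finish and checks its exit code; XCopy returns 0 when files were copied without errors.

C#

using System.Diagnostics;

public static class FileCopier
{
  // Sketch: same xcopy approach, but block until the copy finishes and
  // report success or failure to the caller via the exit code.
  public static bool CopyNewFiles(string source, string destination)
  {
    ProcessStartInfo si = new ProcessStartInfo();
    si.FileName = "xcopy";
    si.Arguments = string.Format(@"{0}\*.* {1}\*.* /q /d /y", source, destination);
    si.UseShellExecute = false;  // run the command directly, without the shell
    si.CreateNoWindow = true;    // keep a console window from flashing up

    using (Process p = Process.Start(si))
    {
      p.WaitForExit();           // wait for xcopy to finish copying
      return p.ExitCode == 0;    // 0 means files were copied without errors
    }
  }
}

Usage would then look something like: bool ok = FileCopier.CopyNewFiles(@"C:\Original", @"D:\Backup");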

    Read the article

  • Performing an upgrade from TFS 2008 to TFS 2010

    - by Enrique Lima
I recently had to go through the process of migrating a TFS 2008 SP1 environment to TFS 2010. I will go into the details of the tasks I went through, but first I want to explain why I call it a migration and not an upgrade.

When this environment was set up, based on support and limitations for TFS 2008, we used a 32-bit platform for the TFS Application Tier and Build Servers. The Data Tier, since we were installing SP1 for TFS 2008, was done as a 64-bit installation. We knew at that point that TFS 2010 was in the picture, which served as further motivation to make that a 64-bit install of SQL Server. The SQL Server at that point was a single-instance (Default) installation too. We had a pretty good strategy in place for backups of the databases supporting the environment (and this made the migration so much smoother), so we were pretty familiar with the databases and the purpose each one serves.

I am sure many of you who have gone through a TFS 2008 installation have encountered challenges and trials, and likely even more so if you, like me, needed to configure your deployment for SSL. So, frankly, I was a little concerned about the migration. They say practice makes perfect, and this environment is in some ways my brain child, so I was neither ready nor willing for this to be a failure or something that would impact my client’s work.

Prior to the migration itself, we installed the new environment. The Data Tier was the same, with a new Named instance in place to host the 2010 install. The Application Tier was in place too, and we did the DefaultCollection configuration to test and validate that all components were in place as they should be.

Anyway, on to the tasks for the migration (thanks to Martin Hinselwood for his very thorough documentation):

1. Close access to TFS 2008. You want to make sure all code is checked in and ready to go. We allowed 8 hours between the code lock and the start of the migration to give time for any unexpected delay. How do we close access? Stop IIS.
2. Back up your databases. Which ones?
   • TfsActivityLogging
   • TfsBuild
   • TfsIntegration
   • TfsVersionControl
   • TfsWorkItemTracking
   • TfsWorkItemTrackingAttachments
3. Restore the databases to the new Named instance (make sure you keep the same names).
4. Now comes the fun part: the actual import/migration of the databases. A couple of things happen here. The TfsIntegration database is scanned, and the other databases are checked to validate that they exist. Their data is then extracted and transferred into the TfsVersionControl database, which is renamed to Tfs_<Collection>. You will be using a tool called tfsconfig with the import option. The tool is located in the TFS 2010 installation path (C:\Program Files\Microsoft Team Foundation Server 2010\Tools), and the command to use is as follows:

   tfsconfig import /sqlinstance:<instance> /collectionName:<name> /confirmed

   Here <instance> is the SQL Server instance to which you restored the databases, and <name> is the name you will give the collection. As for /confirmed, it asserts that you have already backed up the databases. Why? Remember that the databases you restored will be merged when you execute the tfsconfig import command. A concrete example is shown below.

The process goes through about 200 tasks; once it completes, go to the Team Foundation Server Administration Console and validate your imported databases and their contents.
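To make the import step more concrete, here is what the command might look like for a hypothetical environment. The instance name (TFSDATA\Tfs2010) and collection name (LegacyCollection) are made up for illustration; substitute the Named instance you restored to and the collection name you want:

cd "C:\Program Files\Microsoft Team Foundation Server 2010\Tools"
tfsconfig import /sqlinstance:TFSDATA\Tfs2010 /collectionName:LegacyCollection /confirmed

Remember that the six databases must have been restored with their original names to that instance, as noted above, before the import will succeed.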
We’ll keep this manageable, so the next post is about how to complete that implementation with the SSL configuration.

    Read the article

  • How can I thoroughly evaluate a prospective employer?

    - by glenviewjeff
We hear much about code smells, test smells, and even project smells, but I have heard no discussion about employer "smells" outside of the Joel Test. After much frustration working for employers with a bouquet of unpleasant corporate-culture odors, I believe it's time for me to actively seek a more mature development environment. I've started assembling a list of questions to help vet employers by identifying issues during a job interview, and am looking for additional ideas. I suppose this list could easily be modified by an employer to vet an employee as well, but please answer from the interviewee's perspective. I think it would be important to ask many of these questions of multiple people to find out whether consistent answers are given. For the most part, I tried to put the questions in each section in the order they could be asked. An undesired answer to an early question will often make follow-ups moot.

Values
• What constitutes "well-written" software?
• What attributes does a good developer have? Same question for a manager.

Process
• Do you have a development process? How rigorously do you follow it?
• How do you decide how much process to apply to each project?
• Describe a typical project lifecycle. Ask the following if they don't come up otherwise:
  • Waterfall/iterative: How much time is spent in upfront requirements gathering? Upfront design?

Testing
• Who develops tests (developers or separate test engineers)? When are they developed?
• When are the tests executed? How long do they take to execute?
• What makes a good test?
• How do you know you've tested enough? What percentage of code is tested?

Review
• What is the review process like? What percentage of code is reviewed? Design?
• How frequently can I expect to participate as a code/design reviewer/reviewee?
• What are the criteria applied to reviews, and where do the criteria come from?

Improvement
• What new tools and techniques have you evaluated or deployed in the past year?
• What training courses have your employees been given in the past year?
• What will I be doing for the first six months in your company? (This hints at what kind of organized mentorship/training has been thought through, if any.)
• What changes to your development process have been made in the past year?
• How do you improve and learn from your mistakes as an organization?
• What was your organization's biggest mistake in the past year, and how was it addressed?
• What feedback have you given to management lately? Was it implemented? If not, why?
• How does your company use "best practices"? How do you seek them out, from the outside or within, and how do you share them with each other?

Ethics
• Tell me about an ethical problem you or your employees experienced recently and how it was resolved.
• Do you use open-source software? What open-source contributions have you made?

Follow-Ups
I liked what @jim-leonardo said on this Stack Overflow question: "Really a thing to ask yourself: 'Does this person seem like they are trying to recruit me and make me interested?'" I think this is one of the most important bits. If they seem to take the attitude that the only one being interviewed is you, then they will probably treat you poorly. Good interviewers understand they have to sell the position as much as the candidate needs to sell themselves.

@SethP added: Glassdoor.com is a good web site for researching potential employers. It contains information about how specific companies conduct interviews...

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes.

As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission-critical and time-sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement, and that comes in the form of Fusion Applications.

We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk.

Each month, we have to consolidate data from our primary general ledgers, and we have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. We can also submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility into period-end actuals during the close.

A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we can fully deploy ledger sets at the consolidation level, and we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill-down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using the Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle.

A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams.
It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to take on more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings."

In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team, who get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and is pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group."

Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller!

Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    Read the article
