Search Results

Search found 4853 results on 195 pages for 'continuous integration'.

Page 24/195 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Tomcat SSL integration issue

    - by small_ticket
    Hi all, I've bought a wildcard SSL certificate from a company. I sent them the CSR file and they sent me two certificate files, namely CA.txt and com_sertificate. I've searched the web and found some tutorials about Tomcat and SSL, but I cannot get anywhere with these two files; all of those tutorials mention different files that I don't have. (I asked the company I bought the certificates from about this process, but they said they don't have any knowledge about Tomcat integration.) Does anyone have an idea about this? P.S. I'm using Ubuntu 8.04 Server, Java 1.6 and Tomcat 6.
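
    For what it's worth, here is a sketch of the usual Tomcat/JSSE approach, under the assumption that the CSR was generated with keytool and that com_sertificate is the domain certificate; the keystore name, aliases and password below are placeholders. First import the CA bundle and then the domain certificate into the keystore that holds the private key:

        keytool -import -trustcacerts -alias root -file CA.txt -keystore mykeystore.jks
        keytool -import -trustcacerts -alias tomcat -file com_sertificate -keystore mykeystore.jks

    then point an HTTPS connector at it in conf/server.xml:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/path/to/mykeystore.jks" keystorePass="changeit" />

    If the CSR was instead generated elsewhere (e.g. with OpenSSL), the private key and certificates would first need to be combined into a keystore, which may be why the generic tutorials don't match the two files at hand.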

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I'll-share-it-in-a-blog-post blog posts.

    I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID. During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment time they all have a default value of -1. Of course, they tend to get changed during development, so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option, but given that we have over 40 packages, and that we might want to carry out this reset fairly frequently, I needed a more automated method, so I turned to Visual Studio's Find/Replace feature.

    Of course, we don't know what value will be in that parameter, so I can't simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I'll explain how to do that.

    For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>). If you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio:

      <?xml version="1.0"?>
      <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
        <DTS:PackageParameters>
          <DTS:PackageParameter
            DTS:CreationName=""
            DTS:DataType="3"
            DTS:Description="InterfaceHistory_ID: used for Lineage"
            DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
            DTS:ObjectName="ETLIfcHist_ID">
            <DTS:Property
              DTS:DataType="3"
              DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
          </DTS:PackageParameter>
          <DTS:PackageParameter
            DTS:CreationName=""
            DTS:DataType="3"
            DTS:Description="Some other description"
            DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"
            DTS:ObjectName="SomeOtherObjectName">
            <DTS:Property
              DTS:DataType="3"
              DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>
          </DTS:PackageParameter>
        </DTS:PackageParameters>
      </DTS:Executable>

    We are trying to identify the value of the parameter whose name is ETLIfcHist_ID - notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document:

      {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>}

    Note the name of the parameter that we're looking for, ETLIfcHist_ID, embedded within it.
    There are also two portions identified by pairs of curly braces "{…}"; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions shown here:

      <DTS:PackageParameters>
        <DTS:PackageParameter
          DTS:CreationName=""
          DTS:DataType="3"
          DTS:Description="InterfaceHistory_ID: used for Lineage"
          DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
          DTS:ObjectName="ETLIfcHist_ID">
          <DTS:Property
            DTS:DataType="3"
            DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
        </DTS:PackageParameter>

    Those sections in the curly braces are termed tag expressions and can be referenced in the replace expression using a backslash and a number identifying which tag expression you mean, according to its ordinal position. Hence, our replace expression is simply:

      \1-1\2

    We're saying that the portion of our file identified by the regular expression should be replaced by the first curly-brace section, then the literal -1, then the second curly-brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio's Find and Replace dialog. If you set it to look in "All Open Documents" then you can open up the code-behind of all your packages and change all of them at once.

    That's it! I realise that not everyone will be looking to change the value of a parameter, but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y'know, on the web, I have no doubt that someone is going to find a fault with my find regex expression, and if that person is you... that's OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly.

    When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage

    @Jamiet
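
    The same capture-group technique can be scripted outside Visual Studio if you prefer. Below is a hypothetical Java sketch (my own, not from the post) that applies an equivalent replacement to the text of a .dtsx file using java.util.regex; the file name and the -1 default are assumptions, and the pattern is deliberately simplified to anchor on the parameter name rather than reproducing the full expression above:

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.regex.Pattern;

        public class ResetParameterDefault {
            public static void main(String[] args) throws Exception {
                Path dtsx = Paths.get("MyPackage.dtsx"); // hypothetical package file
                String xml = new String(Files.readAllBytes(dtsx), StandardCharsets.UTF_8);

                // Groups 1 and 2 play the role of the {...} tag expressions;
                // (?s) lets .*? span the newlines between the two attributes.
                Pattern p = Pattern.compile(
                    "(?s)(DTS:ObjectName=\"ETLIfcHist_ID\">.*?DTS:Name=\"ParameterValue\">)"
                    + "[^<]*"           // the current value, whatever it happens to be
                    + "(</DTS:Property>)");

                // "$1-1$2" is the java.util.regex spelling of the \1-1\2 replacement.
                Files.write(dtsx, p.matcher(xml).replaceAll("$1-1$2")
                                   .getBytes(StandardCharsets.UTF_8));
            }
        }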

    Read the article

  • Authorize.net Payment integration

    - by Click Upvote
    I'm looking to do Authorize.net payment integration with a website using PHP. My questions are: 1) Where can I find a tutorial, development guide, and/or code samples for doing this with PHP? 2) Is it possible to obtain a test account to do the integration, like PayPal's sandbox, or does one need a live account to which you can pass an additional parameter indicating that the transaction is a test one? All other advice will also be helpful. Thanks!
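
    The question asks for PHP, but the wire format is just an HTTPS form POST, so a sketch in any language translates directly. Below is a hedged Java illustration of an AIM-style transaction against the historical Authorize.net test endpoint; the field names follow the AIM documentation of that era, and the credentials are placeholders:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;
        import java.util.LinkedHashMap;
        import java.util.Map;

        // Sketch of an AIM-style card transaction POST; not a drop-in integration.
        public class AuthorizeNetAimSketch {
            public static void main(String[] args) throws Exception {
                Map<String, String> fields = new LinkedHashMap<String, String>();
                fields.put("x_login", "YOUR_API_LOGIN_ID");        // placeholder
                fields.put("x_tran_key", "YOUR_TRANSACTION_KEY");  // placeholder
                fields.put("x_version", "3.1");
                fields.put("x_delim_data", "TRUE");
                fields.put("x_delim_char", "|");
                fields.put("x_type", "AUTH_CAPTURE");
                fields.put("x_method", "CC");
                fields.put("x_card_num", "4111111111111111");      // standard test PAN
                fields.put("x_exp_date", "1225");
                fields.put("x_amount", "19.99");
                fields.put("x_test_request", "TRUE");              // mark as a test transaction

                StringBuilder body = new StringBuilder();
                for (Map.Entry<String, String> e : fields.entrySet()) {
                    if (body.length() > 0) body.append('&');
                    body.append(e.getKey()).append('=')
                        .append(URLEncoder.encode(e.getValue(), "UTF-8"));
                }

                // Test endpoint; the production endpoint differs.
                URL url = new URL("https://test.authorize.net/gateway/transact.dll");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                OutputStream out = conn.getOutputStream();
                out.write(body.toString().getBytes("UTF-8"));
                out.close();

                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                System.out.println(in.readLine()); // pipe-delimited response fields
                in.close();
            }
        }

    On question 2: Authorize.net has historically offered both developer test accounts and the x_test_request flag for marking live-account transactions as tests, which is exactly the distinction being asked about; check the current documentation before relying on either.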

    Read the article

  • Hosting MSMQ Integration Binding with WAS

    - by foosnazzy
    It appears that WAS does not support the MSMQ integration binding. I've tried setting it up by enabling the net.msmq protocol in WAS just to confirm this, to no avail. Has anyone gotten this to work? If not, what are you using to host integration services? I'm leaning towards a Windows Service.

    Read the article

  • .NET integration Framework

    - by buhtla
    Is there any kind of framework that provides a generic mechanism for integration between applications, or something similar? By integration I mean data exchange (import and export) between two applications via some standard interface, like a web service.

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. [Figure: example dataflow]

    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components.

    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that people assume adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this - it is intuitive to think that more components means more work - however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration, and that the number of components is actually one of the less important ones; having said that, I have never proven that assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible.

    In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive, running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - a single Conditional Split component, CSPL Split by Month of Birth and income category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. [Screenshot: the expression logic in use]

    2. Derived Column & Conditional Split - a Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get.
    A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. [Screenshots: the expression logic in use for DER Income Category and CSPL Split by Month of Birth and Income Category]

    3. Three Conditional Splits - a Conditional Split component that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. [Screenshots: the expression logic in use for CSPL Split by Income Category and CSPL Split by Month of Birth 1 & 2]

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. [Screenshot: the dataflow containing three Conditional Split components]

    As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: the dataflow containing just one Conditional Split (i.e. #1) will be quicker; there is no significant difference between any of them; or one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation; all durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. [Screenshot: results table]

    It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and salary category in the first dataflow is set up. Observe that there is a case for each combination of month of birth and income category - 24 in total. These expressions get evaluated in the order that they appear, and hence, if we assume that month of birth and income category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is (1 (the minimum) + 24 (the maximum)) / 2 = 12.5. Now take a look at the screenshots for the second dataflow.
    We are doing one expression evaluation in DER Income Category, and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row - that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 and CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5.

    14.5 is still more than 12.5 and 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate - one for income category and one for month of birth - whereas the expressions in the Conditional Splits of the third dataflow only have one predicate, so they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between

      MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

      MONTH(BirthDate) == 1

    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles - that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course, there are many variables in play and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results - let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package, and remember: execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow - Designing for performance (webinar). Any comments? Let me know!

    @Jamiet
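
    To make the expected-evaluation arithmetic concrete, here is a small illustrative sketch (mine, not from the original post) that computes expected predicate evaluations per row for each design under the same uniform-distribution assumption. Note that it counts individual boolean predicate tests, and counts only the one month split each row actually passes through, so the absolute numbers differ from the per-dataflow totals above, but the ranking matches the measured result:

        // Expected predicate evaluations per row for the three dataflow designs,
        // assuming months and income categories are uniformly distributed.
        public class ExpectedEvaluations {

            // A split whose n cases are tested in order examines (1 + n) / 2 cases
            // on average; each case costs `predicatesPerCase` boolean tests.
            static double expected(int cases, int predicatesPerCase) {
                return (1 + cases) / 2.0 * predicatesPerCase;
            }

            public static void main(String[] args) {
                double one   = expected(24, 2);                  // one 24-case split, 2 predicates per case
                double two   = 1 + expected(24, 2);              // derived column + the same split
                double three = expected(2, 1) + expected(12, 1); // income split, then one month split

                System.out.println("One Conditional Split:    " + one);   // 25.0
                System.out.println("Derived Column + Split:   " + two);   // 26.0
                System.out.println("Three Conditional Splits: " + three); // 8.0
            }
        }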

    Read the article

  • Process spawned by exec-maven-plugin blocks the maven process

    - by Arnab Biswas
    I am trying to execute the following scenario using Maven:

    pre-integration-test phase: start a Java-based application using a main class (using exec-maven-plugin)
    integration-test phase: run the integration test cases (using maven-failsafe-plugin)
    post-integration-test phase: stop the application gracefully (using exec-maven-plugin)

    Here is a pom.xml snip:

      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>1.2.1</version>
          <executions>
            <execution>
              <id>launch-myApp</id>
              <phase>pre-integration-test</phase>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <executable>java</executable>
            <arguments>
              <argument>-DMY_APP_HOME=/usr/home/target/local</argument>
              <argument>-Djava.library.path=/usr/home/other/lib</argument>
              <argument>-classpath</argument>
              <classpath/>
              <argument>com.foo.MyApp</argument>
            </arguments>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <version>2.12</version>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <forkMode>always</forkMode>
          </configuration>
        </plugin>
      </plugins>

    If I execute mvn post-integration-test, my application is started as a child process of the Maven process, but the application process blocks the Maven process from executing the integration tests, which come in the next phase. Later I found that there is a bug (or missing functionality?) in the Maven exec plugin, because of which the application process blocks the Maven process. To address this issue, I have encapsulated the invocation of MyApp.java in a shell script and then appended "> /dev/null 2>&1 &" to spawn a separate background process. Here is the snip (this is just a snip and not the actual one) from runTest.sh:

      java -DMY_APP_HOME=$2 com.foo.MyApp > /dev/null 2>&1 &

    Although this solves my issue, is there any other way to do it? Am I missing any argument for exec-maven-plugin?
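
    One alternative to the shell wrapper, sketched below under the assumption that a Java 7+ JDK is available, is a tiny launcher class that exec-maven-plugin starts instead of com.foo.MyApp; the launcher forks the real application as a detached child process and returns immediately, so the build is never blocked. The class name and paths here are hypothetical:

        import java.io.File;
        import java.io.IOException;

        // Hypothetical launcher, run by exec-maven-plugin in pre-integration-test.
        public class MyAppLauncher {
            public static void main(String[] args) throws IOException {
                ProcessBuilder pb = new ProcessBuilder(
                        "java",
                        "-DMY_APP_HOME=/usr/home/target/local",    // same JVM args as the pom
                        "-Djava.library.path=/usr/home/other/lib",
                        "-cp", "target/classes",                   // assumption: adjust to the real classpath
                        "com.foo.MyApp");
                pb.redirectErrorStream(true);                      // merge stderr into stdout
                pb.redirectOutput(new File("target/myapp.log"));   // requires Java 7+
                pb.start();                                        // no waitFor(): return immediately
            }
        }

    The post-integration-test stop step would then need some way to signal the child (for example a PID file written by the launcher), which is the same problem the shell-script approach has to solve.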

    Read the article

  • Redmine git integration - issue in accessing git from redmine but not from external git client

    - by Guruprasad
    I have set up Redmine Git integration with Apache as described in the Redmine documentation. I have a /git path accessible with auth and /git-private accessible only to Redmine. When I clone the repository through the /git path, I get the up-to-date repo. But when I try to view it in the Redmine repo viewer, I get a 404 "The entry or revision was not found in the repository." error. Trying to clone using the git-private URL on the Redmine box gives a bare repository, though it is the same repo as the one cloned via the /git path. I have enabled RedmineGitSmartHttp on the /git path. What could be the issue here?

      PerlLoadModule Apache::Redmine
      SetEnv GIT_PROJECT_ROOT /path/to/git/root
      SetEnv GIT_HTTP_EXPORT_ALL
      ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
      <Location /git>
          AuthType Basic
          Require valid-user
          AuthName "Git"
          PerlAccessHandler Apache::Authn::Redmine::access_handler
          PerlAuthenHandler Apache::Authn::Redmine::authen_handler
          RedmineDSN "DBI:mysql:database=<dbname>;host=<db host>"
          RedmineDbUser "<user>"
          RedmineDbPass "<password>"
          RedmineGitSmartHttp yes
      </Location>
      <Location /git-private>
          Order deny,allow
          Deny from all
          <Limit GET PROPFIND OPTIONS REPORT>
              Options Indexes FollowSymLinks MultiViews
              Allow from <redmine public ip>
              Allow from <redmine pvt ip>
              Allow from <localhost>
          </Limit>
      </Location>

    Read the article

  • TeamCity EC2 Integration via ISA Server

    - by Tim Long
    I have a TeamCity server which is actually installed on SBS 2003 Premium with ISA Server (firewall/proxy) installed. My ADSL connection has multiple IP addresses, which all resolve directly to my SBS external NIC. The NIC is therefore multi-homed and I have allocated one of the IP addresses specifically to TeamCity. In ISA, I've created an access rule to allow the traffic in. I can access my TeamCity server externally and view the web interface, that all works fine. I want to use the Amazon EC2 integration in TeamCity to launch build agents 'in the cloud'. The problem I am having is that when the agent starts, it sees the server and registers, then just sits there waiting. On the server side, the agent appears as 'disconnected'. Examining the settings, the agent's IP address appears to be that of the external NIC. What I think might be happening is that the traffic is undergoing Network Address Translation (NAT) so that TeamCity always thinks the agent is locally installed and therefore can't communicate with the actual remote agent. This seems to happen even though I have a permanent static IP address dedicated to TeamCity. So, the question is this. How can I make traffic to a specific IP address pass through the ISA server un-NATted?

    Read the article

  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites. It has been working perfectly, as expected, for nearly a year now. Recently I wanted to add some features to one of my sites, and I need Git integration. I've correctly installed git & gitweb on my server, and I can create repositories and watch them under http://sub.domain.com/git/gitweb.cgi

    Here is the current relevant directory tree:

      /home/user/domains/sub.domain.com/public_html/git/
      drwxr-sr-x user   user .
      drwxr-x--- user   user ..
      -rw-r--r-- user   user git-favicon.png
      -rw-r--r-- user   user git-logo.png
      -rwxr-xr-x user   user gitweb.cgi
      -rw-r--r-- user   user gitweb.css
      drwxrwx--- apache user reponame.git

      /home/user/domains/sub.domain.com/public_html/git/reponame.git/
      drwxrwx--- apache user .
      drwxr-sr-x user   user ..
      drwxrwx--- apache user branches
      -rwxrwx--- apache user config
      -rwxrwx--- user   user description
      -rwxrwx--- apache user HEAD
      drwxrwx--- apache user hooks
      drwxrwx--- apache user info
      drwxrwx--- apache user objects
      drwxrwx--- apache user refs

    But I have some questions:

    1. When I'm visiting http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why? How can I change that?

    2. Usually, to create a new Git repository, I'll do something like:

         $ mkdir proj
         $ cd proj
         $ git init
         Initialized empty Git repository in /home/user/proj/.git/
         (here I create the files or copy them from somewhere else)
         $ git add *.php
         $ git add README
         $ git commit -m 'initial version'

       But after creating the repository in Virtualmin, I can find a new dir named 'reponame.git' but not the '.git' dir. When I try to run any git command (e.g. git status) I receive "fatal: This operation must be run in a work tree". How can I work with that repository?

    3. Currently I need to explicitly grant access for users to be able to view the repositories via gitweb. How can I make certain repositories public?
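
    A note on question 2: a directory like reponame.git is a bare repository, i.e. it has no working tree, which is why commands such as git status fail inside it. A minimal sketch of the usual workflow (paths hypothetical) is to clone it, work in the clone, and push back:

      $ git clone /home/user/domains/sub.domain.com/public_html/git/reponame.git ~/work/reponame
      $ cd ~/work/reponame
      $ # create or copy files, then:
      $ git add .
      $ git commit -m 'initial version'
      $ git push origin master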

    Read the article

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:

      <Location /svn>
          DAV svn
          SVNParentPath /mnt/svn/new_repos
          SVNListParentPath on
          AuthName "VegiBanc Source Repository"
          AuthType basic
          AuthzLDAPAuthoritative off
          AuthBasicProvider ldap
          AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
          AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
          AuthLDAPBindPassword "swordfish"
      </Location>

    This works great. Any user in our Active Directory can access our Subversion repository. Now, I want to limit this to only people in the Active Directory group Development:

      <Location /svn>
          DAV svn
          SVNParentPath /mnt/svn/new_repos
          SVNListParentPath on
          AuthName "VegiBanc Source Repository"
          AuthType basic
          AuthzLDAPAuthoritative off
          AuthBasicProvider ldap
          AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
          AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
          AuthLDAPBindPassword "swordfish"
          Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
      </Location>

    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):

      [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752]
      vauth_ldap authenticate: user dweintraub authentication failed;
      URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter]

    And, I get this in my access_log:

      10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
      10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535

    Yes, I am in that group. (Or, at least how can I confirm that just to make sure that's not the issue. I have the SysinternalsSuite ADExplorer. It's where I'm getting all of my info.)

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no Report Library template and no Report Center site template; the only additional functionality available is the Report Viewer web part. Background: a 2-server WSS 3.0 farm with CA (Central Administration), the WFE (web front end) and the Reporting Services add-in installed on one server, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the process a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'site settings' I have a 'Reporting Services' heading with a 'manage shared schedules' item/link; I'm not sure if there should be other options. I was of the understanding that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a Report Library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the information online seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the Payment Gateway/PG website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from Hostgator/HG. Options: 1) I take SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is being shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours' downtime (which is OK). 2) I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I've got a few doubts in this case: a) Would I need to re-register with existing PGs (PayPal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please suggest.

    Read the article

  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of their Server 2008 R2 boxes were erroring out with these errors:

      Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function.
      Unable to discover information for filesytem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume

    So I backed up all the registry entries with {f612849e-7125-11e0-8772-806e6f6e6963} in them and deleted them, based on some VERY sparse info from R1Soft. I then decided to restore them before I rebooted, and to do a system state backup first using MS backup - and even it errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured that if MS backup can't back something up, any other program is likely to have problems. Also, now that I've realized the servers have FAT32 partitions on them, the error referencing NTFS takes on more weight. The partitions on both servers have the label "OS", but on one of the computers the partition is given a letter, and on the other not. So I am thinking that if I just convert the file systems from FAT32 to NTFS, it may solve the backup problem. So the question is this: can I just convert those partitions, and does anyone have any concrete knowledge of any major downsides, like the servers not coming back up? (Of course, I would do one at a time.) My thinking is that the answer is probably at least 95% no, but they are production servers, so I wanted to get some second opinions.
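
    For reference, Windows includes an in-place conversion utility, convert.exe, that upgrades FAT32 to NTFS without reformatting (the reverse direction is not supported). A sketch, assuming D: is the FAT32 "OS" partition:

      C:\> convert D: /FS:NTFS

    The volume without a drive letter would need one assigned first (e.g. with diskpart), and if the volume is in use the conversion is scheduled for the next reboot. Whether the conversion also resolves the R1Soft errors is exactly the open question here, so converting one server at a time, as suggested, seems wise.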

    Read the article

  • SOA & BPM Integration Days February 23rd & 24th 2011 Düsseldorf Germany

    - by Jürgen Kress
    The key German SOA experts will present at the SOA & BPM Integration Days 2011. For all German SOA users it's the key event in 2011, so make sure you register today! Speakers include: Torsten Winterberg (OPITZ), Hajo Normann (HP), Guido Schmutz (Trivadis), Dirk Krafzig (SOAPARK), Niko Köbler (OPITZ), Clemens Utschig-Utschig (Boehringer Ingelheim), Nicolai Josuttis (IT communication), Bernd Trops (SOPERA), Berthold Maier (Oracle Deutschland) and Jürgen Kress (Oracle EMEA). The agenda is exciting for all SOA and BPM experts! See you in Düsseldorf, Germany!

    The time has come: the Entwickler Akademie and the magazine Business Technology present the first SOA, BPM & Integration Days! This unique training event gathers the leading minds in the German-speaking world around SOA, BPM and integration. For two days you will receive valuable information, insights and experience from day-to-day project work, without any marketing filter. You only have to decide which personal focus topics to set. Beyond that, you have the unique chance to discuss your questions and challenges with experts from the field. At the SOA, BPM & Integration Days you profit from the concentrated practical experience of the authors of the SOA Spezial magazine (a publication of the Software & Support Verlag), well-known book authors and long-standing speakers at the JAX conferences. You should not miss this event! Details and registration at: www.soa-bpm-days.de

    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required).

    Read the article

  • Oracle OpenWorld 2012 Hands-on Lab: “Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefits from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don't be late. This HOL10233 - "Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration" lab is scheduled for: Date: Monday, Oct 1; Time: 10:45 AM - 11:45 AM; Location: Marriott Marquis - Nob Hill CD. In this Hands-on Lab, learn how to integrate Oracle E-Business Suite with third-party applications in your ecosystem with Oracle Application Integration Architecture (AIA) Foundation Pack. This hands-on lab focuses on SOA-based integration with Oracle E-Business Suite, using Oracle E-Business Suite Integrated SOA Gateway or Oracle Applications Adapter to orchestrate a process across disparate applications in your ecosystem. Objectives for this session are to: Learn how to design and build an integration, from functional definition to development; Learn how to automatically build services with robust error handling logic; Learn how to discover and reuse services available in Oracle E-Business Suite.

    Read the article

  • Tuesday at Oracle OpenWorld 2012 - Must See Session: “Oracle Fusion Applications: Best Practices in Integration Design Patterns”

    - by Lionel Dubreuil
    Don't miss this "CON8685 - Oracle Fusion Applications: Best Practices in Integration Design Patterns" session. Speakers: Rajesh Raheja - Senior Director, Development, Oracle; Ravi Sankaran - Director, Applications Development, Oracle. Date: Tuesday, Oct 2; Time: 1:15 PM - 2:15 PM; Location: Palace Hotel - Telegraph. Oracle Fusion Applications provide various ways to integrate their functional capabilities with other Oracle applications as well as third-party and legacy applications. In this session, you will learn the patterns used when communicating with Oracle Fusion Applications with a SOA approach. It addresses identifying the integration artifacts available (also known as assets) in Oracle Enterprise Repository; how to invoke synchronous and asynchronous web services; importing and exporting bulk data; and any integration issues to look out for. The patterns will be applicable to on-premises and SaaS/cloud deployment modes and are indicated as such. Objectives for this session are to: Highlight the various ways to integrate with Oracle Fusion Applications; Showcase use of Oracle Fusion Middleware technologies for integration; Describe best practices and design patterns for integration.

    Read the article

  • running a laptop continuously

    - by intuited
    I have an experienced laptop — a Dell Latitude D400, with a Pentium M CPU — that I'd like to run as an always-on server. This model was launched in 2004; I got mine second-hand in about 2007. I've heard that continuous operation is generally not a good idea with consumer hardware, but am lacking in specific knowledge about related problems, and have little idea of how much such usage patterns would reduce the lifespan of the machine. I'm mostly concerned with the unit's core components; parts such as the hard drive which are readily replaceable are, well, readily replaceable. What sorts of things can I do to increase the lifespan of this machine under such circumstances? For example, I'm guessing that it would be wise to limit the CPU frequency or take other steps to keep the internal temperature low. However, I'm not sure where the point of diminishing returns would lie with such an approach — 50°C? 40°C? Would it be useful to suspend the machine periodically, for perhaps an hour each day, or a few hours each week?

    Read the article

  • Monday at Oracle OpenWorld 2012 - Must See Session: “Using the Right Tools, Techniques, and Technologies for Integration Projects”

    - by Lionel Dubreuil
    Don't miss this "CON8669 - Using the Right Tools, Techniques, and Technologies for Integration Projects" session with Timothy Hall - Sr. Director, Oracle. Date: Monday, Oct 1; Time: 3:15 PM - 4:15 PM; Location: Moscone South - 308. Every integration project brings its own unique set of challenges. There are many tools and techniques to choose from. How do you ensure that you have a means of consistently and repeatedly making decisions about which tools, techniques, and technologies are used? In working with many customers around the globe, Oracle has developed a set of criteria to help evaluate a variety of common integration questions. This session explores these criteria and how they have been further organized into decision trees that offer a repeatable means for ensuring that project teams are given the same guidance from project to project. Using these techniques, the presentation shows how you can reduce risk and speed productivity for your projects. Objectives for this session are to: Discuss common questions that arise at the start of integration projects; Review various decision criteria and approaches for getting to a consistent set of answers; Explore how these techniques can be used to reduce risk and speed productivity.

    Read the article

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program? What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want? An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids, and then iterates over the records and writes them as a string to an outfile. To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated more as an anti-pattern.
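
    One common middle ground is a test that wires the real collaborators together while swapping only the slow boundary (the real database, the HTTP layer) for fast in-memory stand-ins. Below is a minimal JUnit 4 sketch along those lines; Controller, Repository and OutfileWriter are hypothetical types matching the question's description, not a real API:

        import static org.junit.Assert.assertEquals;

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Arrays;
        import java.util.List;
        import org.junit.Test;

        public class ControllerIntegrationTest {

            // Hypothetical collaborators matching the question's description.
            interface Repository { List<String> fetch(List<Integer> ids); }

            static class OutfileWriter {
                void write(Path file, List<String> records) throws IOException {
                    Files.write(file, records, StandardCharsets.UTF_8);
                }
            }

            static class Controller {
                private final Repository repository;
                private final OutfileWriter writer;
                Controller(Repository repository, OutfileWriter writer) {
                    this.repository = repository;
                    this.writer = writer;
                }
                void handle(List<Integer> ids, Path outfile) throws IOException {
                    writer.write(outfile, repository.fetch(ids));
                }
            }

            @Test
            public void wiresControllerRepositoryAndWriterTogether() throws IOException {
                // In-memory repository stands in for the real database.
                Repository repo = new Repository() {
                    public List<String> fetch(List<Integer> ids) {
                        return Arrays.asList("record-1", "record-2");
                    }
                };
                Path outfile = Files.createTempFile("records", ".txt");

                new Controller(repo, new OutfileWriter()).handle(Arrays.asList(1, 2), outfile);

                assertEquals(Arrays.asList("record-1", "record-2"),
                             Files.readAllLines(outfile, StandardCharsets.UTF_8));
            }
        }

    Because the test exercises the real wiring between the three classes rather than mock expectations, it fails when the composition is wrong, without merely duplicating the unit tests.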

    Read the article

  • jQuery Horizontal Ticker with Continuous loop

    - by Dkong
    I am using a jQuery ticker which is pretty cool. It works well with predefined content, but I want to build my tags dynamically by getting the data from a feed via the $.ajax method. http://progadv.uuuq.com/jStockTicker/ The problem is that when I do this the ticker won't work; it looks like the function might be running before my page content has loaded. Can anybody think of a way around this? $(function() { $("#ticker").jStockTicker({interval: 45}); });

    Read the article

  • Android SeekBar Minimum and Continuous Float Value Attributes

    - by fordays
    I'm looking for a way to implement a minimum value in my SeekBar and also to step through decimal numbers. For example, currently my SeekBar's minimum is set to 0, but I need it to start at the value 0.2. Also, I would like the user to be able to select a number from 0.2 to 10.0 at 0.1 precision, so they can choose numbers like 5.6 or 7.1. Here are the style attributes for my SeekBar:

      <style name="SeekBar">
          <item name="android:paddingLeft">30dp</item>
          <item name="android:paddingRight">30dp</item>
          <item name="android:layout_width">fill_parent</item>
          <item name="android:layout_height">wrap_content</item>
          <item name="android:layout_marginTop">0dp</item>
          <item name="android:layout_marginBottom">0dp</item>
          <item name="android:max">10</item>
          <item name="android:progress">1</item>
      </style>
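
    SeekBar of that era only exposes an integer range starting at 0, so the usual workaround is to keep the widget integral and map its progress onto the desired float range in code. A sketch under that assumption: (10.0 - 0.2) / 0.1 = 98 steps, so max becomes 98 (overriding the android:max of 10 in the style above) and the value is recovered as 0.2 + progress * 0.1:

        import android.widget.SeekBar;

        // Maps SeekBar's 0..98 integer range onto 0.2..10.0 in 0.1 increments.
        public class FloatSeekBarMapper {
            private static final float MIN = 0.2f;
            private static final float STEP = 0.1f;
            private static final float MAX = 10.0f;

            public static void attach(SeekBar seekBar) {
                // (10.0 - 0.2) / 0.1 = 98 discrete steps above the minimum.
                seekBar.setMax((int) ((MAX - MIN) / STEP));
                seekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
                    @Override
                    public void onProgressChanged(SeekBar bar, int progress, boolean fromUser) {
                        float value = MIN + progress * STEP; // e.g. progress 54 -> 5.6
                        // use `value` wherever the float reading is needed
                    }
                    @Override public void onStartTrackingTouch(SeekBar bar) {}
                    @Override public void onStopTrackingTouch(SeekBar bar) {}
                });
            }
        }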

    Read the article

  • Pig_Cassandra integration causes ERROR 1070: Could not resolve CassandraStorage using imports

    - by Le Dude
    I'm following a basic Pig, Cassandra and Hadoop installation. Everything works just fine standalone; no errors. However, when I tried to run the example file provided by the Pig_cassandra example, I got this error:

      [root@localhost pig]# /opt/cassandra/apache-cassandra-1.1.6/examples/pig/bin/pig_cassandra -x local /opt/cassandra/apache-cassandra-1.1.6/examples/pig/example-script.pig
      Using /opt/pig/pig-0.10.0/pig-0.10.0-withouthadoop.jar.
      2012-10-24 21:14:58,551 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.0 (r1328203) compiled Apr 19 2012, 22:54:12
      2012-10-24 21:14:58,552 [main] INFO org.apache.pig.Main - Logging error messages to: /opt/pig/pig_1351138498539.log
      2012-10-24 21:14:59,004 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
      2012-10-24 21:14:59,472 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
      Details at logfile: /opt/pig/pig_1351138498539.log

    Here is the log file:

      Pig Stack Trace
      ---------------
      ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]

      org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
          at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1597)
          at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1540)
          at org.apache.pig.PigServer.registerQuery(PigServer.java:540)
          at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
          at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
          at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
          at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
          at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
          at org.apache.pig.Main.run(Main.java:555)
          at org.apache.pig.Main.main(Main.java:111)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at java.lang.reflect.Method.invoke(Method.java:601)
          at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
      Caused by: Failed to parse: Cannot instantiate: CassandraStorage
          at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184)
          at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1589)
          ... 14 more
      Caused by: java.lang.RuntimeException: Cannot instantiate: CassandraStorage
          at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:510)
          at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:791)
          at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:780)
          at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:4583)
          at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3115)
          at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1291)
          at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789)
          at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507)
          at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382)
          at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175)
          ... 15 more
      Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
          at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:495)
          at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:507)
          ... 24 more
      ================================================================================

    I googled around and got to this point from another Stack Overflow user who identified the potential problem but not the solution: Cassandra and pig integration cause error during startup. I believe my configuration is correct and the path has already been defined properly. I didn't change anything in the pig_cassandra file. I'm not quite sure how to proceed from here. Please help?
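
    For context, ERROR 1070 generally means Pig could not resolve the class name against its classpath and import list. Assuming the Cassandra jars are reachable (e.g. via PIG_CLASSPATH, which the pig_cassandra wrapper is supposed to set), one thing to try is referencing the storage handler by its fully qualified name in the script; in the Cassandra 1.1 tree that class lived at org.apache.cassandra.hadoop.pig.CassandraStorage, an assumption worth verifying against your jar. Keyspace and column family below are placeholders:

      rows = LOAD 'cassandra://MyKeyspace/MyColumnFamily'
             USING org.apache.cassandra.hadoop.pig.CassandraStorage();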

    Read the article
