Search Results

Search found 22845 results on 914 pages for 'software deployment'.


  • Working in Germany (Munich) [closed]

    - by adri
    My husband and I are relocating to Munich in August (he's German). I have been looking for jobs online, since I heard that software developers are in great demand, and by the looks of it that seems to be correct. There are lots of offers online, but I have a problem: my German is not spectacular. I would rate it as basic: good enough to be a tourist, but not good enough to write anything formal, not even my cover letter. So I was wondering how hard it is really going to be to find a job with virtually no German (I'm studying, but good German is not going to happen overnight). Also, would it be possible to secure a job before arriving in Germany? I have nearly 10 years of working experience developing software, mostly in .net (c#, vb, winForms, asp.net), but I also know Java, plus 3 years of experience as a team leader for groups of up to 8 people. In the last couple of years I've been working mostly on digitalization of documents (such as birth certificates and the like), but I'm more than willing to try other fields. I also speak English, Spanish, Italian, a bit of Portuguese and, of course, a bit of German.

    Read the article

  • Can I install applications to Remote Desktop Session Hosts via Group Policy?

    - by CC.
    I have a GPO that installs an application using the Software installation policy under Computer Configuration. I assign this GPO to the OU with our desktop/laptop computers, and my clients all install the software fine. I have another separate OU that covers our new Server 2012 RD session hosts. Previously, we've manually installed applications on our one Terminal Server. Now we have one Broker and two Session Hosts. I'd like to take my existing GPO, assign it to the session hosts, and have it install on the next reboot after a gpupdate so I'm sure that each is identically configured. Given this info: Should I be able to install applications via GPO to Session Hosts? Will Group Policy automatically install the applications as if I put the session host into /install mode, or do I need to do that?

    Read the article

  • How do I sell Oracle? It all starts with the database

    - by swiesner
    Partner sales reps often ask me how they can start a conversation with their customers around Oracle. The first question should be: "Are you running an Oracle database?" Much of what we do at Oracle is intended to optimize our customers' investments in the Oracle database. Many of our acquisitions, new features in existing products, and new applications are designed to improve the experience of the 400,000 Oracle database customers worldwide. Once you find an Oracle database customer, the next set of questions is much easier, but depends on your expertise: License Management, Upsell, and Services. Let's start with License Management: Have there been any changes to your infrastructure since you purchased the database licenses? Have you upgraded the servers on which the Oracle database is running? Has the number of users or employees increased since the last license purchase? A yes to any of these questions will lead to investigating correct licensing. The goal is to provide a "soft" license review. Oracle generally does not require any license keys to install our software, so we need to help our customers with compliance. Correct licensing is essential to managing costs, and can provide a great way to efficiently manage IT spend. You might want to contact your local Oracle Channel Manager or VAD for licensing assistance, or review the Software Investment Guide. Naturally, these questions can lead to upsell opportunities. If your customer has invested in Oracle technology to manage their data, that data is essential to running their business. We'll take a look at those upsell questions in a later blog post.

    Read the article

  • AT91SAM7X512's SPI peripheral gets disabled on write to SPI_TDR

    - by Dor
    My AT91SAM7X512's SPI peripheral gets disabled on the Xth write (X varies) to SPI_TDR. As a result, the processor hangs in the while loop that checks the TDRE flag in SPI_SR. This while loop is located in the function SPI_Write(), which belongs to the software package/library provided by ATMEL. The problem occurs arbitrarily: sometimes everything works OK, and sometimes it fails on repeated attempts (an attempt being downloading the same binary to the MCU and running the program). The configurations are (defined in the order of writing):

        SPI_MR:     MSTR = 1, PS = 0, PCSDEC = 0, PCS = 0111, DLYBCS = 0
        SPI_CSR[3]: CPOL = 0, NCPHA = 1, CSAAT = 0, BITS = 0000, SCBR = 20, DLYBS = 0, DLYBCT = 0
        SPI_CR:     SPIEN = 1

    After setting the configurations, the code verifies that the SPI is enabled by checking the SPIENS flag. I perform a transmission of bytes as follows:

        const short int dataSize = 5;
        // Filling array with random data
        unsigned char data[dataSize] = {0xA5, 0x34, 0x12, 0x00, 0xFF};
        short int i = 0;
        volatile unsigned short dummyRead;

        SetCS3(); // NPCS3 == PIOA15
        while(i-- < dataSize)
        {
            mySPI_Write(data[i]);
            while((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
            dummyRead = SPI_Read(); // SPI_Read() from Atmel's library
        }
        ClearCS3();

        /**********************************/

        void mySPI_Write(unsigned char data)
        {
            while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
            AT91C_BASE_SPI0->SPI_TDR = data;
            while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TDRE) == 0); // <-- This is where
            // the processor hangs, because the SPI peripheral is disabled
            // (SPIENS equals 0), which makes TDRE equal to 0 forever.
        }

    Questions:
    1. What's causing the SPI peripheral to become disabled on the write to SPI_TDR?
    2. Should I un-comment the line in SPI_Write() that reads the SPI_RDR register, i.e. the 4th line in the following code? (The 4th line is originally marked as a comment.)

        void SPI_Write(AT91S_SPI *spi, unsigned int npcs, unsigned short data)
        {
            // Discard contents of RDR register
            //volatile unsigned int discard = spi->SPI_RDR;

            /* Send data */
            while ((spi->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
            spi->SPI_TDR = data | SPI_PCS(npcs);
            while ((spi->SPI_SR & AT91C_SPI_TDRE) == 0);
        }

    3. Is there something wrong with the code above that transmits 5 bytes of data?

    Please note: the NPCS line num. 3 is a GPIO line (i.e. in PIO mode) and is not controlled by the SPI controller. I'm controlling this line myself in the code, by de/asserting the ChipSelect#3 (NPCS3) pin when needed. The reason I'm doing so is that problems occurred while trying to let the SPI controller control this pin. I didn't reset the SPI peripheral twice, because the errata says to reset it twice only if a software reset is performed, which I don't do. Quoting the errata:

        If a software reset (SWRST in the SPI Control Register) is performed, the SPI may not
        work properly (the clock is enabled before the chip select.)
        Problem Fix/Workaround
        The SPI Control Register field, SWRST (Software Reset) needs to be written twice to be
        correctly set.

    I noticed that sometimes, if I put a delay before the write to the SPI_TDR register (in SPI_Write()), the code works perfectly and the communication succeeds.

    Useful links:
    - AT91SAM7X Series Preliminary.pdf
    - ATMEL software package/library
    - spi.c from Atmel's library
    - spi.h from Atmel's library

    An example of initializing the SPI and performing a transfer of 5 bytes would be highly appreciated and helpful.

    Read the article

  • Why does File::Slurp return a scalar when it should return a list?

    - by BrianH
    I am new to the File::Slurp module, and on my first test with it, it was not giving the results I was expecting. It took me a while to figure it out, so now I am interested in why I was seeing this behavior. My call to File::Slurp looked like this:

        my @array = read_file( $file ) || die "Cannot read $file\n";

    I included the "die" part because I am used to doing that when opening files. My @array would always end up with the entire contents of the file in the first element of the array. Finally I took out the "|| die" section, and it started working as I expected. Here is an example to illustrate:

        perl -de0

        Loading DB routines from perl5db.pl version 1.22
        Editor support available.

        Enter h or `h h' for help, or `man perldebug' for more help.

        main::(-e:1):   0
          DB<1> use File::Slurp

          DB<2> $file = '/usr/java6_64/copyright'

          DB<3> x @array1 = read_file( $file )
        0  'Licensed material - Property of IBM.'
        1  'IBM(R) SDK, Java(TM) Technology Edition, Version 6'
        2  'IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6'
        3  ''
        4  'Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved.'
        5  'Copyright IBM Corporation, 1998, 2009. All rights reserved.'
        6  ''
        7  'The Apache Software License, Version 1.1 and Version 2.0'
        8  'Copyright 1999-2007 The Apache Software Foundation. All rights reserved.'
        9  ''
        10  'Other copyright acknowledgements can be found in the Notices file.'
        11  ''
        12  'The Java technology is owned and exclusively licensed by Sun Microsystems Inc.'
        13  'Java and all Java-based trademarks and logos are trademarks or registered'
        14  'trademarks of Sun Microsystems Inc. in the United States and other countries.'
        15  ''
        16  'US Govt Users Restricted Rights - Use duplication or disclosure'
        17  'restricted by GSA ADP Schedule Contract with IBM Corp.'
          DB<4> x @array2 = read_file( $file ) || die "Cannot read $file\n";
        0  'Licensed material - Property of IBM.
        IBM(R) SDK, Java(TM) Technology Edition, Version 6
        IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6

        Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved.
        Copyright IBM Corporation, 1998, 2009. All rights reserved.

        The Apache Software License, Version 1.1 and Version 2.0
        Copyright 1999-2007 The Apache Software Foundation. All rights reserved.

        Other copyright acknowledgements can be found in the Notices file.

        The Java technology is owned and exclusively licensed by Sun Microsystems Inc.
        Java and all Java-based trademarks and logos are trademarks or registered
        trademarks of Sun Microsystems Inc. in the United States and other countries.

        US Govt Users Restricted Rights - Use duplication or disclosure
        restricted by GSA ADP Schedule Contract with IBM Corp.
        '

    Why does the || die make a difference? I have a feeling this might be more of a Perl precedence question than a File::Slurp question. I looked in the File::Slurp module, and it looks like it is set to croak if there is a problem, so I guess the proper way to do it is to allow File::Slurp to croak for you. Now I'm just curious why I was seeing these differences.

    Read the article

  • Slicing a time range into parts

    - by beporter
    First question. Be gentle. I'm working on software that tracks technicians' time spent working on tasks. The software needs to be enhanced to recognize different billable rate multipliers based on the day of the week and the time of day. (For example, "time and a half after 5 PM on weekdays.") The tech using the software is only required to log the date, his start time and his stop time (in hours and minutes). The software is expected to break the time entry into parts at the boundaries where the rate multipliers change. A single time entry is not permitted to span multiple days. Here is a partial sample of the rate table:

        [rateTable] => Array
            (
                [Monday] => Array
                    (
                        [00:00:00] => 1.5
                        [08:00:00] => 1
                        [17:00:00] => 1.5
                        [23:59:59] => 1
                    )
                [Tuesday] => Array
                    (
                        [00:00:00] => 1.5
                        [08:00:00] => 1
                        [17:00:00] => 1.5
                        [23:59:59] => 1
                    )
                ...
            )

    The first-level array keys are the days of the week, obviously. The second-level array keys represent the time of day when a new multiplier kicks in, and it runs until the next sequential entry in the array. The array values are the multipliers for those time ranges. In plain English, this represents a time-and-a-half rate from midnight to 8 AM, the regular rate from 8 AM to 5 PM, and time-and-a-half again from 5 PM till 11:59 PM. The times at which these breaks occur may be arbitrary to the second, and there can be an arbitrary number of them for each day. (This format is entirely negotiable, but my goal is to make it as easily human-readable as possible.)

    As an example: a time entry logged on Monday from 15:00:00 (3 PM) to 21:00:00 (9 PM) would consist of 2 hours billed at 1x and 4 hours billed at 1.5x. It is also possible for a single time entry to span multiple breaks. Using the example rateTable above, a time entry from 6 AM to 9 PM would have 3 sub-ranges: 6-8 AM @ 1.5x, 8 AM-5 PM @ 1x, and 5-9 PM @ 1.5x. By contrast, it's also possible that a time entry may run only from 08:15:00 to 08:30:00 and be entirely encompassed by the range of a single multiplier.

    I could really use some help coding up some PHP (or at least devising an algorithm) that can take a day of the week, a start time and a stop time, and parse them into the required subparts. It would be ideal to have the output be an array consisting of multiple (start, stop, multiplier) triplets. For the above example, the output would be:

        [output] => Array
            (
                [0] => Array
                    (
                        [start] => 15:00:00
                        [stop] => 17:00:00
                        [multiplier] => 1
                    )
                [1] => Array
                    (
                        [start] => 17:00:00
                        [stop] => 21:00:00
                        [multiplier] => 1.5
                    )
            )

    I just plain can't wrap my head around the logic of splitting a single (start, stop) into (potentially) multiple subparts.
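
    For what it's worth, here is a minimal sketch of one way to slice such an entry, assuming the rateTable format above. It treats each key as "this multiplier applies from this time until the next key", and relies on zero-padded HH:MM:SS strings comparing chronologically as strings; sliceEntry() is a hypothetical helper name, not part of any library:

        <?php
        // Sketch under the assumptions above: each key in $dayRates marks the
        // time its multiplier takes effect, lasting until the next key.
        function sliceEntry(array $dayRates, $start, $stop)
        {
            $breaks = array_keys($dayRates);  // e.g. 00:00:00, 08:00:00, 17:00:00, ...
            $parts  = array();

            for ($i = 0, $n = count($breaks); $i < $n; $i++) {
                $rangeStart = $breaks[$i];
                $rangeStop  = ($i + 1 < $n) ? $breaks[$i + 1] : '24:00:00';

                // Intersect [start, stop) with [rangeStart, rangeStop).
                // Zero-padded HH:MM:SS strings compare chronologically.
                $s = ($start > $rangeStart) ? $start : $rangeStart;
                $e = ($stop  < $rangeStop)  ? $stop  : $rangeStop;

                if ($s < $e) {
                    $parts[] = array(
                        'start'      => $s,
                        'stop'       => $e,
                        'multiplier' => $dayRates[$rangeStart],
                    );
                }
            }
            return $parts;
        }

        $rateTable = array(
            'Monday' => array(
                '00:00:00' => 1.5,
                '08:00:00' => 1,
                '17:00:00' => 1.5,
                '23:59:59' => 1,
            ),
        );

        // Monday, 3 PM to 9 PM => 15:00-17:00 @ 1, then 17:00-21:00 @ 1.5
        print_r(sliceEntry($rateTable['Monday'], '15:00:00', '21:00:00'));
        ?>

    Adjacent slices that happen to share a multiplier (for instance, on either side of the 23:59:59 key) are not merged here; a second pass could combine them if needed.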

    Read the article

  • What is wrong with my SQL syntax here?

    - by CT
    I'm trying to create an IT asset database with a web front end. I've gathered some data from forms using POST, as well as one variable that had already been written to a cookie. This is the first time I have tried to enter the data into the database. Here is the code:

        <?php
        //get data
        $id = $_POST['id'];
        $company = $_POST['company'];
        $location = $_POST['location'];
        $purchase_date = $_POST['purchase_date'];
        $purchase_order = $_POST['purchase_order'];
        $value = $_POST['value'];
        $type = $_COOKIE["type"];
        $notes = $_POST['notes'];
        $manufacturer = $_POST['manufacturer'];
        $model = $_POST['model'];
        $warranty = $_POST['warranty'];

        //set cookies
        setcookie('id', $id);
        setcookie('company', $company);
        setcookie('location', $location);
        setcookie('purchase_date', $purchase_date);
        setcookie('purchase_order', $purchase_order);
        setcookie('value', $value);
        setcookie('type', $type);
        setcookie('notes', $notes);
        setcookie('manufacturer', $manufacturer);
        setcookie('model', $model);
        setcookie('warranty', $warranty);

        //checkdata
        //start database interactions
        // connect to mysql server and database "asset_db"
        mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
        mysql_select_db("asset_db") or die(mysql_error());

        // Insert a row of information into the table "asset"
        mysql_query("INSERT INTO asset (id, company, location, purchase_date, purchase_order, value, type, notes)
        VALUES('$id', '$company', '$location', '$purchase_date', $purchase_order', '$value', '$type', '$notes') ")
        or die(mysql_error());
        echo "Asset Added";

        // Insert a row of information into the table "server"
        mysql_query("INSERT INTO server (id, manufacturer, model, warranty)
        VALUES('$id', '$manufacturer', '$model', '$warranty') ")
        or die(mysql_error());
        echo "Server Added";

        //destination url
        //header("Location: verify_submit_server.php");
        ?>

    The error I get is:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '', '678 ', 'Server', '789')' at line 2

    That data is just test data I was trying to throw in there, but the problem looks to be at the $value, $type, $notes part.
    Here are the table create statements, if they help:

        <?php
        // connect to mysql server and database "asset_db"
        mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
        mysql_select_db("asset_db") or die(mysql_error());

        // create asset table
        mysql_query("CREATE TABLE asset(
        id VARCHAR(50) PRIMARY KEY,
        company VARCHAR(50),
        location VARCHAR(50),
        purchase_date VARCHAR(50),
        purchase_order VARCHAR(50),
        value VARCHAR(50),
        type VARCHAR(50),
        notes VARCHAR(200))")
        or die(mysql_error());
        echo "Asset Table Created.</br />";

        // create software table
        mysql_query("CREATE TABLE software(
        id VARCHAR(50) PRIMARY KEY,
        software VARCHAR(50),
        license VARCHAR(50))")
        or die(mysql_error());
        echo "Software Table Created.</br />";

        // create laptop table
        mysql_query("CREATE TABLE laptop(
        id VARCHAR(50) PRIMARY KEY,
        manufacturer VARCHAR(50),
        model VARCHAR(50),
        serial_number VARCHAR(50),
        esc VARCHAR(50),
        user VARCHAR(50),
        prev_user VARCHAR(50),
        warranty VARCHAR(50))")
        or die(mysql_error());
        echo "Laptop Table Created.</br />";

        // create desktop table
        mysql_query("CREATE TABLE desktop(
        id VARCHAR(50) PRIMARY KEY,
        manufacturer VARCHAR(50),
        model VARCHAR(50),
        serial_number VARCHAR(50),
        esc VARCHAR(50),
        user VARCHAR(50),
        prev_user VARCHAR(50),
        warranty VARCHAR(50))")
        or die(mysql_error());
        echo "Desktop Table Created.</br />";

        // create server table
        mysql_query("CREATE TABLE server(
        id VARCHAR(50) PRIMARY KEY,
        manufacturer VARCHAR(50),
        model VARCHAR(50),
        warranty VARCHAR(50))")
        or die(mysql_error());
        echo "Server Table Created.</br />";
        ?>

    Running a standard LAMP stack on Ubuntu 10.04. Thank you.
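
    A note for anyone reading along: the MySQL error points at a quoting slip in the first INSERT. $purchase_order' is missing its opening quote, so the string literal ends early and the rest of the VALUES list is malformed. A corrected sketch of just that statement, assuming the old mysql_* API stays in place (the mysql_real_escape_string() call is added only as a precaution against quotes inside the submitted data):

        // Corrected sketch: note the restored opening quote before $purchase_order.
        // Escaping the value first keeps stray quotes in the data from breaking
        // the statement in the same way.
        $purchase_order = mysql_real_escape_string($_POST['purchase_order']);

        mysql_query("INSERT INTO asset
            (id, company, location, purchase_date, purchase_order, value, type, notes)
            VALUES('$id', '$company', '$location', '$purchase_date',
                   '$purchase_order', '$value', '$type', '$notes')")
        or die(mysql_error());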

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished up with a prototype implementation of a supervised learning algorithm, automatically assigning categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project. I've done this kind of work before, so I know what the functional components of the software will be. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc., etc.

    So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development - ignoring some advanced features that we'd like to have in the long term, but aren't high enough priority to make it into the development schedule yet - is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project were starting from scratch.) My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams - like the web developers - have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories.

    So I created a user story at the top level of the project: "As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database." Or maybe a better top-level story for this feature would be: "As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database."

    But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space. All the agile development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature extractor. There's no business value (not for the end user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.
    I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: "As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist." But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc.), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • How can I get TFS2010 to run MSDEPLOY for me through MSBUILD?

    - by Simon_Weaver
    There is an excellent PDC talk available here which describes the new MSDEPLOY features in Visual Studio 2010, as well as how to deploy an application within TFS. You can use MSBUILD within TFS2010 to call through to MSDEPLOY to deploy your package to IIS. This is done by means of parameters to MSBUILD. The talk explains some of the command line parameters, such as:

        /p:DeployOnBuild
        /p:DeployTarget=MsDeployPublish
        /p:CreatePackageOnPublish=True
        /p:MSDeployPublishMethod=InProc
        /p:MSDeployServiceURL=localhost
        /p:DeployIISAppPath="Default Web Site"

    But where is the documentation for this? I can't find any. I've been spending all day trying to get this to work, can't quite get it right, and keep ending up with various errors. If I run the package's cmd file it deploys perfectly. But I want to get the whole deployment running through msbuild using these arguments, not a separate call to msdeploy or running the package .cmd file. How can I do this?

    PS. Yes, I do have the Web Deployment Agent Service running. I also have the management service running under IIS. I've tried using both. The args I'm using are:

        /p:DeployOnBuild=True
        /p:DeployTarget=MsDeployPublish
        /p:Configuration=Release
        /p:CreatePackageOnPublish=True
        /p:DeployIisAppPath=staging.example.com
        /p:MsDeployServiceUrl=https://staging.example.com:8172/msdeploy.axd
        /p:AllowUntrustedCertificate=True

    giving me:

        C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets(2660): VsMsdeploy failed. (Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com) could not be contacted. Make sure the remote agent service is installed and started on the target computer.) Error detail: Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com) could not be contacted. Make sure the remote agent service is installed and started on the target computer. An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected. The remote server returned an error: (401) Unauthorized.

    Read the article

  • Deploy ASP.NET MVC 2 to IIS 7.5 targeting .NET 3.5

    - by Agent_9191
    I created an ASP.NET MVC 2 application in Visual Studio 2008. I set the release build to go through the ASP.NET compiler to precompile all the views, minify Javascript and CSS, clean up the web.config, etc. Since the production deployment is going to an IIS6 server, I set up my pseudo-production deployment on my Windows 7 machine to have the application pool run in classic mode, targeting the 2.0 runtime. I set up the necessary extensionless handler in the web.config, and everything worked great. The problem came when I upgraded the solution to Visual Studio 2010. I'm still targeting the 3.5 framework, but now I'm using MSBuild 4.0, since that's what Visual Studio 2010 uses. Everything still compiles correctly, and it runs fine under Cassini, but when I deploy it to the same location (same application pool, identity, etc.) it now behaves differently. I still have the extensionless handler in the web.config, but now when I navigate to the root of the application it does directory browsing, and any routes that it had previously handled now come back as 404 errors handled by the StaticFile handler in IIS. I'm at a loss as to what changed and is causing the break. I have looked at this question, but I have already verified that all the prerequisite components are installed.

    Read the article

  • Hibernate 3.5-Final in JBoss 5.1.0.GA

    - by Bozhidar Batsov
    Hibernate 3.5-Final is finally here, and it offers the much-anticipated JPA2 support, amongst other features. I am working on a project (EJB3-based) using JBoss 5.1.0.GA and Hibernate 3.3, but I wanted to take advantage of JPA2 and tried to upgrade to Hibernate 3.5. What I did was fairly simple and standard - I just put all the Hibernate 3.5 jars in the server/configuration (default, all, etc.)/lib folder - that way they take precedence over the Hibernate artifacts shipped with JBoss. It seems, though, that JBoss ships with libraries that depend on the JPA1 implementation that is part of Hibernate 3.3, because I started getting errors about unimplemented abstract methods and the like on deploy:

        23:21:26,792 WARN  [Ejb3Configuration] Persistence provider caller does not implement the EJB3 spec correctly. PersistenceUnitInfo.getNewTempClassLoader() is null.
        23:21:26,792 ERROR [AbstractKernelController] Error installing to Start: name=persistence.unit:unitName=kernel-ear-3.3.0-SNAPSHOT.ear/config-persistence.jar#ConfigurationPersistenceUnit state=Create
        java.lang.AbstractMethodError: org.jboss.jpa.deployment.PersistenceUnitInfoImpl.getValidationMode()Ljavax/persistence/ValidationMode;
            at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:613)
            at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:72)
            at org.jboss.jpa.deployment.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:301)
            at sun.reflect.GeneratedMethodAccessor308.invoke(Unknown Source)

    Maybe I should use a different persistence provider? Currently it's:

        org.hibernate.ejb.HibernatePersistence

    I looked around the net and didn't find any documented upgrade paths. There was even an unanswered question here on Stack Overflow on the topic. Any ideas or suggestions? Thanks in advance for your help.

    Read the article

  • JBoss error: Cannot process metadata

    - by Nila
    Hi! I'm trying to implement a stateless session bean (EJB3) in JBoss 5 using NetBeans 6.8 as an editor. When I try deploying my application, I get the following error. What is the issue here?

        17:45:04,901 ERROR [AbstractKernelController] Error installing to PostClassLoader: name=vfszip:/E:/Shalini/jboss-5.1.0.GA/server/default/deploy/InsighIT1.1-ejb.jar/ state=ClassLoader mode=Manual requiredState=PostClassLoader
        org.jboss.deployers.spi.DeploymentException: Cannot process metadata
            at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)
            at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:181)
            at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:93)
            at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1210)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
            at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
            at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
            at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
            at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702)
            at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117)
            at org.jboss.system.server.profileservice.hotdeploy.HDScanner.scan(HDScanner.java:362)
            at org.jboss.system.server.profileservice.hotdeploy.HDScanner.run(HDScanner.java:255)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
            at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
            at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: java.lang.ClassNotFoundException: tomcat.Main from BaseClassLoader@1d6d136{VFSClassLoaderPolicy@41312b{name=vfszip:/E:/hh/jboss-5.1.0.GA/server/default/deploy/InsighIT1.1-ejb.jar/

    Read the article

  • MSBuild 2010 - how to publish web app to a specific location (nant)?

    - by Mr. Flibble
    I'm trying to get MSBuild 2010 to publish a web app to a specific location. I can get it to publish the deployment package to a particular path, but the deployment package then adds its own path, which changes. For example: if I tell it to publish to C:\dev\build\Output\Debug, then the actual web files end up at

        C:\dev\build\Output\Debug\Archive\Content\C_C\code\sawadee\frontend\IPP-FrontEnd\Source\ControllersViews\obj\Debug\Package\PackageTmp

    and the C_C part of the path changes (I'm not sure how it chooses this part of the path). This means I can't just script a copy from the publish location. I'm using this NAnt/MSBuild command at the moment:

        <target name="compile" description="Compiles">
            <msbuild project="${name}.sln">
                <property name="Platform" value="Any CPU"/>
                <property name="Configuration" value="Debug"/>
                <property name="DeployOnBuild" value="true"/>
                <property name="DeployTarget" value="Package"/>
                <property name="PackageLocation" value="C:\dev\build\Output\Debug\"/>
                <property name="AutoParameterizationWebConfigConnectionStrings" value="false"/>
                <property name="PackageAsSingleFile" value="false"/>
            </msbuild>
        </target>

    Any ideas on how to get it to send the web files directly to a specific location?

    Read the article

  • Understanding the Silverlight Dispatcher

    - by Matt
    I had an Invalid Cross Thread Access issue, but a little research and I managed to fix it by using the Dispatcher. Now, in my app I have objects with lazy loading. I make an async call using WCF and, as usual, use the Dispatcher to update my object's DataContext; however, it didn't work for this scenario. I did, however, find a solution here. Here's what I don't understand. In my UserControl I have code to call a Toggle method on my object. The call to this method is made via the Dispatcher, like so:

        Dispatcher.BeginInvoke( () => _CurrentPin.ToggleInfoPanel() );

    As I mentioned before, this was not enough to satisfy Silverlight. I had to make another Dispatcher call within my object. My object is NOT a UIElement, but a simple class that handles all its own loading/saving. So the problem was fixed by calling

        Deployment.Current.Dispatcher.BeginInvoke( () => dataContext.Detail = detail );

    within my class. Why did I have to call the Dispatcher twice to achieve this? Shouldn't a high-level call be enough? Is there a difference between Deployment.Current.Dispatcher and the Dispatcher in a UIElement?

    Read the article

  • Bamboo to Build Specific SVN Revision

    - by Anton Gogolev
    Hi! Imagine there's a project in Bamboo with two build plans: Staging Deployment (SD) and Production Deployment (PD). Building SD checks out the latest sources, builds them, and deploys the web site to a staging server. Currently, PD does exactly the same, namely deploying the latest version of the web site to a production server. Clearly, this is not very good: I want to be able to deploy the exact same version of the web site that was previously deployed on the staging server, not the latest one. To illustrate: suppose we're at r101 in the SVN repo. Clicking "Build SD" will deploy a web site version, say, 2.1.0.101 to the staging server. Now we commit a breaking change and end up at r102. Now I want to deploy to the production server. If I hit "Build PD", Bamboo will happily check out r102 and build it, resulting in version 2.1.0.102 being deployed to the production server. What I want it to do, however, is build and deploy the version which was previously built by the SD plan (that is, 2.1.0.101). Of course, I can make the SD plan tag the latest successful build as tags/builds/latest, but I would rather have Bamboo itself handle that.

    Read the article

  • Java EE suitability for a social network using a Cassandra datastore?

    - by Marcos
    We are in the process of making some important technology decisions for a social networking application. We're planning to use Cassandra (a NoSQL database) to support efficient data storage, and we would be using Hector (a Java client) to interact with it.

    1.) Would Java EE be a good choice over PHP for a social networking application in terms of performance, scalability and complexity?
    2.) As another possible implementation strategy, would it be suitable to have the backend alone in Java and the rest in PHP?
    3.) What difference (compared to PHP) does it make in terms of costs at the various stages of application development, deployment and maintenance?
    4.) What are the things to keep in mind as we move along with Java development and deployment (as we are relatively new to the Java background)?
    5.) Could you list some major production deployments of similar (social network) applications in Java?

    Thank you!

    Read the article

  • TFS Disk Structure - and "Add new folder" vs "Add solution"

    - by NealWalters
    Our organization recently got TFS 2008 set up and ready for our use. I have a practice TeamProject available to play with. To simplify slightly, we previously organized our code on disk like this:

        - EC
          - Main
            - Database
              - someScript1.sql
              - someScript2.sql
            - Documents
              - ReleaseNotes_V1.doc
            - Source
              - Common
                - Company.EC.Common.Biztalk.Artifacts [folder]
                - Company.EC.Common.BizTalk.Components [folder]
                - Company.EC.Common.Biztalk.Deployment [folder]
                - Company.EC.BookTransfer.BizTalk.sln
              - BookTransfer
                - Company.EC.BookTransfer.BizTalk.Artifacts [folder]
                - Company.EC.BookTransfer.BizTalk.Components [folder]
                - Company.EC.BookTransfer.BizTalk.Components.UnitTest [folder]
                - Company.EC.BookTransfer.BizTalk.Deployment [folder]
                - Company.EC.BookTransfer.BizTalk.sln

    I'm trying to decide: do I want to check in the entire C:\EC directory, or do I want to open each solution and check it in? What are the pros and cons of each? It seems that by doing the "Add Files/Folder" option, I could check in everything at once, and it would match the disk structure. It also looks like, if I check in each solution separately, that creates another working folder in my workspace. I think if I check in by "Add Files/Folder", I will have one workspace, and that would be better. But most of the books and samples I see talk about checking in projects and solutions.

    P.S. I know I need to add more to my disk structure in accordance with the Branch/Merge guidelines, but that is not the question I'm asking here.

    Thanks, Neal Walters

    Read the article

  • What is a good embeddable Java LDAP server?

    - by LeedsSideStreets
    I'm working on a Java web application that integrates with a few other external applications that are deployed along with it. Authentication information must be synchronized across everything, and the other applications want to authenticate against LDAP. The application will be deployed in environments where there will be no other LDAP server for everything to use; I have to provide it. My solution so far has been to use Penrose Server as a standalone app, which I set up to examine tables in the main application's database and publish LDAP based on that. It works well, but it would be nice to have something that can be embedded into the main application itself to simplify deployment. It looks like Penrose can be embedded, but the documentation can be a bit spotty or out-of-date (though it seems to be actively developed). It could be an acceptable solution, but if there is another option out there that is known to work well in an embedded configuration, I might want to check it out. I'm also concerned about GPL issues with Penrose. I'm not at liberty to GPL the source code for the application. I don't believe it was an issue running it standalone, but embedding it may be a no-no... anybody know for sure? A permissive license would be good in order to avoid these issues.

    Requirements:
    - LDAP v3
    - Must be able to have the directory contents updated while running, either programmatically or by another means, like syncing with the database as Penrose does
    - Easy to configure (no additional configuration for the app at deployment time would be ideal)

    So far I've briefly looked at ApacheDS and OpenDS, which seem to be embeddable. Does anyone have experience with this kind of thing?

    Read the article

  • Portlet container like pluto or jetspeed on google app engine?

    - by Patrick Cornelissen
    I am trying to build something "portlet server"-ish on the Google App Engine (as open source). I'd like to use the JSR 168/286 standards, but I think the restrictions of the App Engine will make it somewhere between tricky and impossible. Has anyone tried to run Jetspeed, or an application that uses Pluto internally, on the Google App Engine? Based on my current knowledge of portlets and the App Engine, I'm anticipating these problems:

    - A war file with portlets is, from the deployment standpoint, more or less a complete webapp (yes, I know that it doesn't really work without a portal server). The war file may contain its own web.xml, etc. This makes deployment on the App Engine rather difficult, because apps are not visible to each other, so all portlet-containing archives need to be included in the war file of the deployed "App Engine based portal server".
    - The "portlets" are (at least in Liferay) started as permanent servlet processes, based on their portlet.xml and web.xml files, which are located in the same spot for every portlet archive that is loaded. I think this may be problematic on the App Engine, because everything is in one big "web app", so it may be tricky to access the portlet.xml from each archive. This prevents 100% compatibility, in my opinion.

    Is there anyone who has any experience with the combination of portlets and the App Engine? Do you think it's feasible to modify Jetspeed, Pluto or any other portlet container to run on the App Engine?

    Read the article

  • RDP through TCP Proxy

    - by johng100
    Hi, first time on Stack Overflow, and I'm hoping someone can help me. I'm looking at a proof of concept to pass RDP traffic through a TCP proxy/tunnel which will pass through firewalls using HTTPS. The problem has to do with deploying images to machines, so it can't be assumed that the .NET framework will be present, which is why C++ is being used at the deployment end of the connection. The basic system I have at present is a program which listens for client connections on a port, then passes any data to a WCF service, which stores it as a byte array. A deployment machine (using gSOAP and C++) polls the WCF service for messages and, if it finds them, passes the data on to the target server process via sockets. I know this sounds horrible, but it works for simple test client and server programs passing data back and forth through this WCF/C++/C# proxy layer. But I have to support traffic from RDP, VNC and possibly others, so I need a transparent proxy to do this, and I'm wondering whether the above approach is worth pursuing. I've read up on SSH tunneling, and that seems a possibility. My basic question is: is it possible to tunnel RDP traffic over HTTPS using custom code? Thanks, John

    Read the article

  • application packaging

    - by user285825
    Application packaging (software deliverables) is a vague concept to me. I worked on web applications before, so all I know is how to prepare the deliverable, i.e. the WAR, and then put it into the servlet container; the rest is taken care of. But when considering other Java or C++-based applications (.exe or .class), how do I prepare the package? What should be considered in preparing the package and deploying the application? As I understand it, there may be entries to be made in the Windows registry, or libraries to be put in the /usr/bin directory, and so on. During the development phase, execution and testing are done in a more informal manner, for example with shell scripts, but the deployment scenario is a more formal thing. As I said, I only know about deploying web applications; I am not very knowledgeable in the other areas. Are there any conventions for this, the way there are conventions for documentation, like javadoc, doxygen or man pages? Application packaging is not the sort of thing that gets much discussion in books or online materials. This is very disappointing.

    Read the article

  • Liferay: Customise the web.xml HeaderFilter added during portlet deployment

    - by gid
    I need to customise the deployment of my Liferay portlet such that the GWT .nocache.js files don't get an 'Expires' HTTP header set. My war file looks like this:

        view.jsp
        com.foobar.MyEntryPoint/com.foobar.MyEntryPoint.nocache.js
        com.foobar.MyEntryPoint/12312312313213123123123.cache.html
        WEB-INF/web.xml
        WEB-INF/portlet.xml
        WEB-INF/liferay-portlet.xml
        ... etc

    My web.xml is pretty much empty (it only has the displayName). On deployment, this is rewritten by Liferay to have a series of filters, in particular:

        <filter>
            <filter-name>Header Filter</filter-name>
            <filter-class>com.liferay.portal.kernel.servlet.PortalClassLoaderFilter</filter-class>
            <init-param>
                <param-name>filter-class</param-name>
                <param-value>com.liferay.portal.servlet.filters.header.HeaderFilter</param-value>
            </init-param>
            <init-param>
                <param-name>Cache-Control</param-name>
                <param-value>max-age=315360000, public</param-value>
            </init-param>
            <init-param>
                <param-name>Expires</param-name>
                <param-value>315360000</param-value>
            </init-param>
        </filter>
        <filter-mapping>
            <filter-name>Header Filter</filter-name>
            <url-pattern>*.js</url-pattern>
        </filter-mapping>

    This filter adds an Expires header of around 2020 to the .nocache.js files... the trouble is, these files really shouldn't be cached (the hint is in the name :). For development purposes I have worked around this by disabling the filter globally, using:

        com.liferay.portal.servlet.filters.header.HeaderFilter=false

    in portal-ext.properties. What I would like to do is one of the following:

    - Disable HeaderFilter only for this portlet or war file (I can always add my own Expires header), or
    - Add an init-param to the HeaderFilter to match anything other than .nocache.js files.

    Any ideas how either of these could be achieved?

    Read the article

  • Problem with GWT behind a reverse proxy - either nginx or apache

    - by Don Branson
    I'm having a problem with GWT when it's behind a reverse proxy. The backend app is deployed within a context - let's call it /context. The GWT app works fine when I hit it directly:

        http://host:8080/context/

    I can configure a reverse proxy in front of it. Here's my nginx example:

        upstream backend {
            server 127.0.0.1:8080;
        }
        ...
        location / {
            proxy_pass http://backend/context/;
        }

    But when I run through the reverse proxy, GWT gets confused, saying:

        2009-10-04 14:05:41.140:/:WARN: Login: ERROR: The serialization policy file '/C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.140:/:WARN: Login: WARNING: Failed to get the SerializationPolicy 'C7F5ECA5E3C10B453290DE47D3BE0F0E' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.
        2009-10-04 14:05:41.292:/:WARN: StoryService: ERROR: The serialization policy file '/0445C2D48AEF2FB8CB70C4D4A7849D88.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.292:/:WARN: StoryService: WARNING: Failed to get the SerializationPolicy '0445C2D48AEF2FB8CB70C4D4A7849D88' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.

    In other words, GWT isn't getting the word that it needs to prepend /context/ when looking for C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc, but only when the request comes through the proxy. A workaround is to add the context to the URL for the web site:

        location /context/ {
            proxy_pass http://backend/context/;
        }

    but that means the context is now part of the URL that the user sees, and that's ugly. Anybody know how to make GWT happy in this case?

    Software versions: GWT 1.7.0 (same problem with 1.7.1); Jetty 6.1.21 (but the same problem existed under Tomcat); nginx 0.7.62 (same problem under Apache 2.x). I've looked at the traffic between the proxy and the backend using DonsProxy, but there's nothing noteworthy there.

    Read the article

  • How to properly develop and deploy features for existing asp.net applications on IIS

    - by Tomh
    My question actually consists of multiple questions. I frequently read about companies who deploy a small subset of features for a select group of customers using the live database. Ruby on Rails and its ecosystem have deployment tools and database migrations to deploy or roll back such features in a live production or staging environment. My question: how is this done for an ASP.NET (MVC in particular) application? How do you test your newly released features against live data? Do you have any tools to modify the existing database and roll back changes if necessary? Do you make backups before deployment?

    Update: Maybe I should point out that my question is not really clear; getting more answers here will help me phrase it better. To make it easier, I will describe a situation I commonly see with some of my clients. My clients have large deployments of popular web applications. They do not have staging/QA/testing servers (yes, this is not optimal). The data their apps consist of is images, XML files, user uploads and data in SQL Server. Having a few records from their production database and a couple of dummy files is not a substitute for testing against real data, in my opinion. How would you design a workflow that can create an acceptable environment to mimic a production environment before going live?

    Read the article

  • Silverlight vs. WPF vs. Winforms: what is good specifically for my purpose?

    - by Cyril Gupta
    I am about to start a new Windows application, and the contenders for the platform are:

    - Windows Forms
    - WPF
    - Silverlight

    Now, my experience with WPF, at least on my last application, was not very encouraging (the app failed to run on the deployment machines and I had to re-do it in Winforms), so my confidence is shaken here. My app is for mass distribution (the last version had some 100,000+ installations), so I want to make absolutely sure that my users will be able to use it and enjoy it without any problems. I would love to create a nice interface, going the next step like a Flex, Silverlight or iPhone app, with animations and effects. So I would really like to go with WPF or Silverlight if I can. My needs are:

    - Good support for visuals and animation effects.
    - Support for database connectivity.
    - Support for printing (is there an equivalent of PrintDocument in Silverlight?)
    - No deployment troubles.

    Silverlight is universal, but does it have printing support and a good control toolset? WPF has printing support and a nice toolset, but can I depend on it? Winforms is dated already and is not so impressive, but should I go with it anyway? Your advice would be appreciated.

    Read the article
