Search Results

Search found 24383 results on 976 pages for 'configuration testing'.

Page 60/976 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • QA & Testing with UPK

    - by dan.gallo(at)oracle.com
    Most customers know that UPK produces both Word- and Excel-based test scripts for UAT. Did you know that you can also use UPK for QA review and bug tracking? To use UPK for QA, create content and assign it to authorized reviewers. Have them open the Developer, use customized views to quickly find the content assigned to them, and check out the topics. They can then use the Topic Editor to review the content and add comments directly into the bubbles, or use explanation frames. QA-ing content this way is easier than publishing and sending out .tpcs or documents for people to review. How about UPK for bug tracking? The hardest part about fixing bugs in software is reproducing the error. When you use UPK for bug tracking, it captures the exact steps the user took that produced the error. Development can then walk through the process in a simulated environment to see what might have caused it, they have a documented procedure for what generated the error, and they can communicate better with the line of business (LOB). They can also update or attach the simulation/documentation to any defect-management tool, such as Bugzilla - all thanks to UPK.

    Read the article

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. The white paper covers the following important concepts about Power View:

    • Understanding the hardware and software requirements and their download locations
    • Installing and configuring the required infrastructure when Power View and its data models are on the same computer or on different computers
    • Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2012, and Power View in a workgroup
    • Configuring single sign-on access for double-hop scenarios, with and without Kerberos

    You can download the whitepaper from here. This whitepaper covers many interesting scenarios. It would be really interesting to know whether you are using Power View in your production environment; if yes, please share your experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • How do I explain the importance of NUnit test cases to my colleagues? [duplicate]

    - by JNL
    This question already has an answer here: How to explain the value of unit testing (6 answers). I am currently working in software development for applications involving a lot of mathematical calculations. As a result, there are a lot of test cases we need to consider. We do not have any NUnit test system, and I am wondering how I should present the advantages of implementing NUnit testing to my colleagues and my boss. I am pretty sure it would be of great help for our team. Any help regarding this will be highly appreciated.
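
    For illustration, a minimal NUnit sketch - the InterestCalculator class and its values below are hypothetical stand-ins for the kind of mathematical routine described above:

        using System;
        using NUnit.Framework;

        // Hypothetical stand-in for one of the mathematical routines under test.
        public static class InterestCalculator
        {
            public static double Compound(double principal, double rate, int years)
            {
                return principal * Math.Pow(1.0 + rate, years);
            }
        }

        [TestFixture]
        public class InterestCalculatorTests
        {
            // Each [TestCase] row is one scenario; NUnit runs and reports each one
            // separately, which suits calculation-heavy code with many cases.
            [TestCase(1000.0, 0.05, 1, ExpectedResult = 1050.0)]
            [TestCase(1000.0, 0.05, 2, ExpectedResult = 1102.5)]
            [TestCase(1000.0, 0.00, 5, ExpectedResult = 1000.0)]
            public double Compound_ReturnsExpectedValue(double principal, double rate, int years)
            {
                return InterestCalculator.Compound(principal, rate, years);
            }
        }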

    Read the article

  • Ubuntu-installer fails preseed configuration file

    - by user76171
    I am trying to install Ubuntu 12.04 over the network, unattended. I installed a DHCP server (Dnsmasq) and a TFTP server (tftpd-hpa), I got the netboot.tar.gz archive with the pxelinux.0 file, the pxelinux.cfg directory, the Linux kernel and the initrd.gz image, and I put a preseed file on my web server. Dnsmasq, tftpd-hpa, pxelinux and Apache are all on the same machine. The PC's motherboard doesn't support PXE, so I use iPXE and boot it from a CD. The PC gets an IP from the DHCP server, then iPXE loads pxelinux.cfg/default, which I edited like this:

        timeout 5
        prompt 0
        default install
        label install
            kernel ubuntu-installer/i386/linux
            append vga=normal locale=en_GB setup/layoutcode=sl_SI console-setup/layoutcode=sl_SI netcfg/choose_interface=auto initrd=ubuntu-installer/i386/initrd.gz netcfg/get_hostname=ubuntux preseed/url=http://192.168.10.10/ins/preseed.cfg

    Then it loads the Linux kernel and the initrd.gz image. Then I get a question: Detect keyboard layout? I decided to deal with this later, so I answer No, then twice English just to get through, and then I get to the error: "The installer failed to process the preconfiguration file from http://192.168.10.10/ins/preseed.cfg. The file may be corrupt." I created the file myself and copied the d-i commands into it. I also tried fetching preseed.cfg with a web browser, and that works fine. So why is the installer failing?

    Read the article

  • Traktor Audio 2 DJ soundcard configuration

    - by Jaroslav
    I have a Traktor Audio 2 DJ USB sound card (the first version of what is now called simply the Traktor Audio 2). The problem is that in the settings it only sees one output when there should be two (I need both for Mixxx etc.). I also want to be able to set the sample rate to one of 44.1, 48, 88.2 or 96 kHz, or at least check which one is set. Additionally, being able to set the latency would be an advantage. Some info:

        $ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: HDMI [HDA ATI HDMI], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: TraktorAudio2 [Traktor Audio 2], device 0: Traktor Audio 2 [Traktor Audio 2]
          Subdevices: 1/2
          Subdevice #0: subdevice #0
          Subdevice #1: subdevice #1

        $ cat /proc/asound/cards
        0 [HDMI          ]: HDA-Intel - HDA ATI HDMI
                            HDA ATI HDMI at 0xfdcfc000 irq 45
        1 [TraktorAudio2 ]: snd-usb-caiaq - Traktor Audio 2
                            Native Instruments Traktor Audio 2 (usb-0000:00:1d.7-8)

    Read the article

  • Complete RESTful API debugging/testing tool

    - by vartec
    I'm looking for the most complete tool, preferably a portable GUI or browser plugin, to test RESTful APIs. What I need:

    • GET/POST/DELETE/PUT support
    • multiple file uploads as fields (multipart/form-data)
    • file uploads as the request body

    Extra points for:

    • the ability to save multiple configurations and use them to pre-fill parameters
    • OAuth support
    • nice JSON response formatting

    Currently I'm using three tools:

    • Chrome REST Console extension - my favorite, very nicely done, and it has OAuth. However, it cannot send a file as the body of the request, and it cannot send multiple files.
    • Firefox Poster add-on - quite nice, but it is missing files as POST field parameters, and it also cannot send multiple files.
    • cURL - can do anything, but it is quite tedious to use from the command line.
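
    Until a tool covers everything, the multipart case is only a few lines in code anyway; a rough C# sketch (the endpoint and field names are invented) of a multipart/form-data POST carrying several files:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class MultipartUploadSketch
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                using (var form = new MultipartFormDataContent())
                {
                    // Several files as named fields - the "multiple file uploads as fields" case.
                    form.Add(new ByteArrayContent(new byte[] { 1, 2, 3 }), "attachment1", "a.bin");
                    form.Add(new ByteArrayContent(new byte[] { 4, 5, 6 }), "attachment2", "b.bin");
                    form.Add(new StringContent("smoke test"), "description");

                    // Hypothetical endpoint; substitute the API under test.
                    var response = await client.PostAsync("http://localhost:8080/api/upload", form);
                    Console.WriteLine(response.StatusCode);
                }
            }
        }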

    Read the article

  • Testing Reference Data Mappings

    - by Michael Stephenson
    Background Mapping reference data is one of the common scenarios in BizTalk development, and it's usually a bit of a pain when you need to manage a lot of reference data, whether through the BizTalk cross-referencing features or some kind of custom solution. I have seen many cases where only a couple of the mapping conditions are ever tested. Approach As usual, I like to see these things tested in isolation, before you start using them in your BizTalk maps, so you know your mapping functions are working as expected. This approach can be used for almost all of your reference-data mapping functions, where you can take advantage of MSTest's data-driven tests to cover lots of conditions without having to write millions of tests. Walk Through Rather than go into the details here, I'm going to call out to one of my colleagues, who wrote a nice little walkthrough about using data-driven tests a while back. Check out Callum's blog: http://callumhibbert.blogspot.com/2009/07/data-driven-tests-with-mstest.html
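
    As a flavour of what that looks like in practice, here is a rough MSTest data-driven sketch - the CSV columns and the CountryCodeMap function are invented for illustration, one data row per mapping condition:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hypothetical stand-in for the reference-data mapping function under test.
        public static class CountryCodeMap
        {
            public static string Lookup(string name)
            {
                return name == "United Kingdom" ? "GB" : "??";
            }
        }

        [TestClass]
        public class CountryCodeMapTests
        {
            // MSTest injects the current data row through this property.
            public TestContext TestContext { get; set; }

            [TestMethod]
            [DeploymentItem("CountryCodeCases.csv")]
            [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                        "|DataDirectory|\\CountryCodeCases.csv", "CountryCodeCases#csv",
                        DataAccessMethod.Sequential)]
            public void Lookup_ReturnsExpectedCode()
            {
                // CSV columns assumed: input, expected.
                string input = TestContext.DataRow["input"].ToString();
                string expected = TestContext.DataRow["expected"].ToString();

                Assert.AreEqual(expected, CountryCodeMap.Lookup(input));
            }
        }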

    Read the article

  • ReportViewer configuration

    - by Suresh Behera
    Some of my team members were having trouble configuring the ReportViewer, and most of the posts on the subject are confusing or hard to follow. Here is what we concluded, written up as a quick note. Section 1: this is the report name. Section 2: "ReportServer" is the default name in the URL; even if you don't see it in the browser, you may still need to include "ReportServer" in the URL. In our case the DB/report team said nothing about "ReportServer" in the URL, but it was needed. Section 3: "Adventure Works...(read more)

    Read the article

  • Helper class to dynamically modify the Location configuration element

    - by anas
    The location element is used to restrict user or role access on a specific path. The path could be a folder, an .aspx page, an .ashx or .axd handler, or any other file that is handled by the ASP.NET runtime. In most cases, you use the element declaratively in the web.config file of your website. In doing so, you are telling the ASP.NET runtime - specifically the UrlAuthorizationModule or the FileAuthorizationModule (depending on the authentication mode) - to grant or deny access to that path for the specified...(read more)
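
    As a rough sketch of the programmatic route (the path scoping and the rule below are illustrative assumptions, not the author's actual helper), the locationSubPath overload of OpenWebConfiguration is what makes a change land in a <location> element of the root web.config rather than in a child web.config:

        using System.Configuration;
        using System.Web.Configuration;

        public static class LocationAuthHelper
        {
            public static void DenyAnonymous(string locationSubPath)
            {
                // Opens the root web.config, scoped to <location path="...">.
                Configuration config =
                    WebConfigurationManager.OpenWebConfiguration("~", null, locationSubPath);

                var auth = (AuthorizationSection)config.GetSection("system.web/authorization");

                var deny = new AuthorizationRule(AuthorizationRuleAction.Deny);
                deny.Users.Add("?"); // "?" means anonymous users
                auth.Rules.Add(deny);

                config.Save();
            }
        }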

    Read the article

  • How do you handle measuring Code Coverage in JavaScript

    - by Dancrumb
    In order to measure code coverage for JavaScript unit tests, one needs to instrument the code, run the tests, and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question: how do you handle this? One thought I had was to run the unit tests on the production code and use that for my pass/fail. I would then create a shadow of my production code with instrumentation and run my unit tests again; this would give me my code-coverage stats. Has anyone come across a method that is a little more graceful than this?

    Read the article

  • Samba new file ownership, permissions configuration

    - by Martin Melka
    I have recently installed Samba on my server. Now I have a question about permissions and how to set them up. Currently I mount the Samba share on my laptop with this line in /etc/fstab:

        //<host>/share /mnt/melka-server-data/ cifs username=<usrname> password=<passwd> _netdev 0 0

    This works, as I can read the files and create them (as root). The problem is when I want to create files as a regular user; I always get a Permission Denied error. This is the ll output of the mounted folder:

        magicmaster@magicmaster-kubuntu:/mnt$ ll
        total 8
        drwxr-xr-x  3 root        root        4096 lis 11 14:15 ./
        drwxr-xr-x 26 root        root        4096 ríj 26 11:01 ../
        drwxrwxrwx  8 magicmaster magicmaster    0 lis 12 22:12 melka-server-data/

    and the inside:

        magicmaster@magicmaster-kubuntu:/mnt/melka-server-data$ ll
        total 4
        drwxrwxrwx  8 magicmaster magicmaster    0 lis 12 22:12 ./
        drwxr-xr-x  3 root        root        4096 lis 11 14:15 ../
        drwxrwxrwx  5 magicmaster magicmaster    0 lis 12 09:35 downloads/
        drwxrwxrwx  2 magicmaster magicmaster    0 ríj 28 12:57 lost+found/
        drwxrwxrwx 15 magicmaster magicmaster    0 lis 12 09:45 movies/
        drwxrwxrwx  2 magicmaster magicmaster    0 lis  1 21:15 newest/
        drwxrwxrwx  3 magicmaster magicmaster    0 lis  2 23:14 photos/
        drwxrwxrwx  2 magicmaster magicmaster    0 ríj 30 12:44 software/
        -rw-r--r--  1 nobody      nogroup        0 lis 12 22:12 zdar

    I called sudo chown -R magicmaster:magicmaster melka-server-data/ to try to make all the files belong to me. Then the file zdar was created by magicmaster just by calling touch. I got Permission Denied, but it was still created, though it belongs to nobody and I can't write into it. When I create a file as root, it still belongs to nobody, but at least I can write into it. What am I missing? I didn't notice anything in the Samba config that would be related to this, and I don't like the idea of having to log on as root in order to copy files. Thanks

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More:

    • Part I: Evolution, and death to WCF
    • Part II: Hot data objects
    • Part III: The architecture using the "Web stack of love"

    If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the parts of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing. Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but their interfaces." Probably the first place a developer exercises this is in having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this:

        public class FooService
        {
            public FooService(IFooRepository fooRepo)
            {
                _fooRepo = fooRepo;
            }

            private readonly IFooRepository _fooRepo;

            public Foo GetMeFoo()
            {
                return _fooRepo.FooFromDatabase();
            }
        }

    When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC. What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation, and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation:

        Unbind<PopForums.Email.INewAccountMailer>();
        Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();

    Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, and mail delivery failures). There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. It uses the forum as the core piece for managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.
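
    For context, those two lines live inside the site's own NinjectModule, which is loaded after the forum's module - a rough sketch (class layout approximated from the post; the referenced types come from POP Forums and the CoasterBuzz code):

        using Ninject.Modules;

        public class CoasterBuzzModule : NinjectModule
        {
            public override void Load()
            {
                // Loaded after PopForums' module, so this replaces the forum's
                // mailer binding instead of duplicating it.
                Unbind<PopForums.Email.INewAccountMailer>();
                Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();
            }
        }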

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now. I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, at this point the level of frustration mixed with the disappointing lack of performance is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files - which is not much more useful, maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front but pays off in the end. I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer saying that the guy debugging the code in 2011 would thank you for having a complete unit-testing suite (I think that comment was made in 2008). So, I'm just wondering: after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests, and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • Wireless USB Adapter driver configuration

    - by Jones
    First, I agree that this site is fully for Ubuntu, not for BackTrack. However, I'm posting my BackTrack question here since similar problems in the BackTrack forum are going unanswered, and the Linux/Unix StackExchange site gets very little traffic. I have a USB Wi-Fi adapter (iBall Baton) with the RTL8191SU chipset. It displays the available Wi-Fi networks in the Wicd manager, but when I try to connect it reports a bad password. When I try to run airmon-ng, it does not return monitor mode. I tried replacing the drivers with the Ubuntu 12.10 drivers, but that completely broke the device's functionality, so I restored the original drivers. I am interested in compiling the drivers myself if anyone can indicate the method. Awaiting ideas!

    Read the article

  • SQLAuthority News – Storage and SQL Server Capacity Planning and Configuration – SharePoint Server 2010

    Just a day ago, I was asked how to plan SQL Server storage capacity. Here is an excellent article published by Microsoft regarding SQL Server capacity planning for SharePoint 2010. The article touches all the vital areas of this subject. Here are the bullet points: Gather storage and SQL Server space [...]

    Read the article

  • Non-public site for testing on shared-hosting site

    - by ptpaterson
    Is it possible, as a developer using a shared hosting provider such as Bluehost, HostGator and the like, to view your site without making it public? Or do the files you upload always go live immediately? Is the best way to test a site (if using shared hosting) just to set up an Apache/MySQL/PHP stack on my own machine? I am considering putting together a site with shared hosting and am trying to see what all my options are. Thanks.

    Read the article

  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like "Watch the pennies and the pounds/dollars will take care of themselves", meaning that if you pay attention to the small things then the larger things will fare well too. I am lucky enough to be a Friend of Red Gate, and once in a while I get told about new features in their tools and get a test copy of the software to trial. I got one of those emails a week or so ago, and I have been exploring the SQL Prompt 6 EAP since then. One really useful, long-standing feature of SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example, I can type "ssf" and then press the Tab key and the text is expanded to SELECT * FROM. There are lots of these combinations, and it is possible to create your own really easily. To create your own, you use the Snippet Manager interface to define the shortcut letters and the code that you want put in their place. Let's look at an example. Say I am writing a blog about something and want to have the demo code create a temporary table. It might look like this: The first time you run the code everything is fine - a lovely set of dates fills the results grid - but run it a second time and this happens. Yep, we didn't destroy the temporary table, so the CREATE statement fails when it finds the table already exists. No matter, I have a snippet created that takes care of this. Nothing too technical here, but you will see that in the Code section there is $CURSOR$. This isn't a TSQL keyword but a marker for SQL Prompt to place the cursor in that position when the code is pasted into the SSMS editor. I just place my cursor above the CREATE statement and type "ifobj" - the shortcut for my code to DROP the temporary table - which has been defined in the Snippet Manager as below. This means I am right away ready to type the name of the offending table. Pretty neat, and it has been very useful in saving me lots of time over many years. The news for SQL Prompt 6 is that Red Gate have added a new snippet command of $PASTE$. Let's alter our snippet to the following and try it out. Once again, we will type "ifobj" in the SSMS editor, but first of all highlight the name of the table #TestTable and copy it to your clipboard. Now type "ifobj" and press Tab... Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted TSQL. This means I don't need to type the table name into the code snippet; it's already there, and I am seeing a fully functioning piece of TSQL ready to run. This makes it even easier to write TSQL quickly and consistently. Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year, and helps take the hard work out of writing neat, accurate TSQL. If you want to try out SQL Prompt, all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, an 8-core box that sits in the corner. I made a mistake recently: I took the key that had ESXi on it, formatted it, and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project, and now I need to spin up a whole bunch of stuff for CI, QA + DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and put everything there - Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure with Docker solutions, and I'm wondering if it is fair to compare them. Is there any reason why CoreOS with containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to switch. I'm not going to run a production env on the server, so I don't need HA; when updating security or the OS, for example, ESXi would let me restart one VM at a time, but I can just shut the whole thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. The CPU would probably run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • SQL Server v.Next (Denali) : Why you should start testing early

    - by AaronBertrand
    Denali is coming, whether you like it or not. You may not be an early adopter and you may not have plans on your current calendar, but at some point you will need to move your apps and databases to this release - or one very much like it. There are a lot of great new features you will be able to take advantage of, but not everything is a double rainbow. There are some changes that will break your spirit if you let them. What does it mean? I go over several breaking changes in my presentation that...(read more)

    Read the article

  • Severity and relation to occurrence - priority?

    - by user970696
    I have been browsing through some web pages related to testing and found one dealing with testing metrics. It says: "The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence)." I do not think this is correct - or what am I missing? Usually it is the priority which is the result of such a calculation (a severe bug that occurs rarely is still severe, but does not have to be fixed immediately). Also, from this description, what is the difference between the effect on the end user and the business impact?
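
    For what it's worth, the usual risk-based reading (a common heuristic rather than a standard definition) keeps the two concepts apart:

        \text{severity} = f(\text{impact of the failure on the end user}), \qquad
        \text{priority} \propto \text{severity} \times \text{frequency of occurrence}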

    Read the article

  • Partition Configuration to avoid reinstalling applications after update

    - by nightcrawler
    The major bane when I update Ubuntu (which is way more frequent than with Windows) is that I lose all my installed applications. To be precise, I do a lot of work in Maple, Matlab and Geogebra, and for all of those I install the Java platform, which isn't very straightforward either, plus the license-management issues, which really give me grief. I don't install applications in /home (so that they are available to all users), so a separate /home partition doesn't help here. Can we circumvent this problem somehow, so that Java-dependent applications along with the JDK don't get blown away after an update - maybe with a separate partition (just like /home) where only custom applications (other than those provided by the Ubuntu Software Center) reside? Further: I use a specific Java binary (Java 6 update 32); it is an important requirement for me, so I don't want it crashed/overwritten or similar.

    Read the article

  • Juju LXC configuration

    - by Preethi
    I've looked at this post (http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage) for setting up Juju in a local environment with LXC. However, is there a way to use Juju with LXC in a non-local environment? I am looking at a scenario where LXC containers are deployed on multiple nodes. That is, let's say I have virtual machines m1 and m2, with WordPress deployed in a container on m1 and MySQL deployed in a container on m2. Is there a way to orchestrate these deployments with Juju?

    Read the article

  • Testing Git competence

    - by David
    I hire a lot of programmers for tiny tasks. I specify very clearly that the tasks can only be completed by making a pull request on GitHub. Unfortunately, many programmers do not know Git, and often they cannot complete the project due to not understanding, or not being willing to learn, Git - even after they have undertaken the programming of the task. This is bad both for me and for the programmers. Sometimes I end up arguing about why it is inefficient for them to just send me a zip file containing the code. Therefore, I am looking for an online service that certifies that a programmer knows how to make a pull request, so that I waste neither their time nor mine. The certificate should be free for the coders, but may cost me. It is important that the course focuses on exactly what is needed to make a clean pull request, so it should not take more than 5 minutes to go through. Does such a thing exist?

    Read the article

  • Configuration Tips for better Performance with ADF Mobile Apps

    - by SRINI INDLA
    Some tips to keep in mind to make sure an ADF Mobile application's performance is optimal: 1. Select release mode in the deployment profile. This is perhaps the most important thing to remember, to ensure the best performance for ADF Mobile apps. Selecting this option causes the deployer to package an optimized JVM and minified JS libraries with the mobile app, thereby significantly improving the overall performance of the application. 2. For iOS you do not need to do anything else other than selecting release mode in the deploy profile. However, on Android you have to create a keystore and configure it in JDev --> Tools --> Preferences --> ADF Mobile --> Platforms: Android, as shown in the snapshot below. 3. Steps for generating the keystore for Android using keytool: 4. Logging level setting in logging.properties: make sure the log level is set to SEVERE for both the framework logger and the application logger, as follows:

        oracle.adfmf.framework.level=SEVERE
        oracle.adfmf.application.level=SEVERE

    5. When using SOAP web services with the WebService Data Control, make sure you select the option to copy the WSDL. This causes JDev to download the WSDL and all the XSDs referenced by the WSDL from the server at design time and package them with the application during deployment. This way the application does not incur the cost of downloading these resources from the device at run time.

    Read the article
