Search Results

Search found 4429 results on 178 pages for 'exchange transport agents'.

Page 104/178 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • What browser is sending a user agent beginning mozilla/5.0+ and translates & into &amp;?

    - by Patrick
    We've got a website which has been running for a few years now. One of our customers has just started having an intermittent problem. Looking at our IIS 6.0 logs, the service works correctly when they have a user agent beginning "mozilla/4.0+" but fails when the user agent begins "mozilla/5.0+". The particular customer only started having this problem on Wednesday. Does anyone know of a browser upgrade which changes the 4.0 to 5.0? The actual problem caused is that an "&" in a URL parameter list is being encoded as "&amp;". Has anyone seen anything similar? We have other users sending from browsers with the 5.0+ user agent without trouble. Sorry about the tags, but I don't have the rep to create new ones. Thanks in advance, Patrick
    Edit: Hi Viper_sb, it is most probably a custom script (I'm primarily a C++ developer so don't really understand). Our site services requests from other customer-developed sites; this one was done in JavaScript as far as I know. We're actually getting a variety of user agents (presumably depending on which of our customers' customers is accessing the service). Here are a few:
    Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+fr;+rv:1.9.1.11)+Gecko/20100701+Firefox/3.5.11
    Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.126+Safari/533.4 302 0 0
    Mozilla/5.0+(Macintosh;+U;+PPC+Mac+OS+X;+fr)+AppleWebKit/523.12+(KHTML,+like+Gecko)+Version/3.0.4+Safari/523.12
    Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8
    Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+fr;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8+(.NET+CLR+3.5.30729)
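    For what it's worth, a double-encoded ampersand is easy to detect and normalize server-side before parsing the query string. A minimal Python sketch, purely illustrative (the parameter names are hypothetical, not from the actual site):
      import html
      from urllib.parse import parse_qs

      raw_query = "user=patrick&amp;id=42"  # hypothetical query string as logged

      # If the client HTML-encoded the separators, unescape before parsing;
      # html.unescape turns "&amp;" back into "&".
      if "&amp;" in raw_query:
          raw_query = html.unescape(raw_query)

      params = parse_qs(raw_query)
      print(params)  # {'user': ['patrick'], 'id': ['42']}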

    Read the article

  • How to manage configuration & automatic rollout of 20 virtual machines

    - by Lucas Meijer
    I have a TeamCity build server with about 20 "build agents", both Windows and Mac OS machines. Often I need to install a newer version of Xcode or Visual Studio or some other tool. Having to do this on all machines manually is boring and error prone. I'm trying to find the best way to achieve the following: make it easy to change a system configuration without having to do it on all machines manually; make it easy to add a new machine to the group; and ensure the machines are as identical as possible. The jobs these machines execute are relatively heavy, fully consuming 8 cores, and very heavy on IO. It's fine if the solution involves spending money.
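    Dedicated configuration-management tools (Puppet, Chef and the like) are the usual answer here. A bare-bones alternative, sketched in Python purely as an illustration (the host names and the install command are made up), is to keep a single machine list and push every change to all agents over ssh so that no box is ever touched by hand:
      import subprocess

      AGENTS = ["build-win-01", "build-mac-01"]  # hypothetical agent host names

      def run_everywhere(command):
          """Run the same shell command on every build agent via ssh."""
          for host in AGENTS:
              print(f"== {host} ==")
              subprocess.run(["ssh", host, command], check=True)

      # Hypothetical example: upgrade a tool on all agents in one pass.
      run_everywhere("sudo /opt/tools/install-xcode.sh 4.5")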

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called a Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.
    High level steps:
    1. Install and configure Visual Studio 2012 Test Controller on the target server
    2. Create a Standard Environment
    3. Create a Test Plan with a Test Case
    4. Run the Test Case
    5. Create a Coded UI Test from the Test Case
    6. Associate the Coded UI Test with the Test Case
    7. Create a Build Definition using LabDefaultTemplate
    1. Install and configure Visual Studio 2012 Test Controller on the target server. First of all, note that you do not have to have the Test Controller running on the target server. It can run on another server, as long as the Test Agent can communicate with the Test Controller and the Test Controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the Test Controller can be installed either on one of those machines or on a dedicated machine. To install the Test Controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the Test Controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the Test Controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the Test Controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.
    2. Create a Standard Environment. Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment and enter a name for it. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine. On the next page, you can add tags to the machine; this is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the Test Controller should be available in the drop-down list (TargetServer in this sample); if you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings, then press Finish. This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.
    3-5. Create Test Plan, Run Test Case, Create Coded UI Test. I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.
    6. Associate Coded UI Test with Test Case. OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your Coded UI Test is already automated, but the meaning of the term here is that you link your Coded UI Test to an existing Test Case, thereby making the Test Case automated. The Test Case should be part of the test suite that we will run during the BDT. Open the solution that contains the Coded UI Test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the Coded UI Test that corresponds to the test case, press OK and then save the Test Case. For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.
    7. Create Build Definition using LabDefaultTemplate. Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remotely deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual. Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. Since this build won't actually produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property. First, select the environment that you created before. Then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition. On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy the database; if you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server; then you would just execute that deployment batch file here in one single step (see the sketch below).
    The workflow defines some variables that are useful when running the deployments. These variables are:
    $(BuildLocation) - the full path to where your build files are located
    $(InternalComputerName_<VM Name>) - the computer name for a virtual machine in an SCVMM environment
    $(ComputerName_<VM Name>) - the fully qualified domain name of the virtual machine
    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission on this drop location. You can find more information here on Building your Deployment Scripts. On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan; in there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests. We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this:
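    As a rough illustration of the single-entry-point deployment script idea, here is a hedged sketch in Python rather than a .cmd file (the file names and arguments are hypothetical; the post itself uses myapplication.deploy.cmd, sqlpackage.exe and msiexec.exe):
      import subprocess
      import sys

      def deploy(build_location):
          """Run every deployment step from one script, locally or on the target server."""
          steps = [
              # Hypothetical web deploy package produced by the build.
              [rf"{build_location}\myapplication.deploy.cmd", "/Y"],
              # Hypothetical database deployment via sqlpackage.exe.
              ["sqlpackage.exe", "/Action:Publish",
               rf"/SourceFile:{build_location}\MyDatabase.dacpac"],
          ]
          for step in steps:
              subprocess.run(step, check=True)  # stop on the first failing step

      if __name__ == "__main__":
          deploy(sys.argv[1])  # e.g. the $(BuildLocation) drop folder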

    Read the article

  • Why Does 64-Bit Windows Need a Separate “Program Files (x86)” Folder?

    - by Jason Fitzpatrick
    If you’re currently using any 64-bit version of Windows, you may have noticed there are two “Program Files” folders, one for 64-bit and one for 32-bit apps. Why does Windows need to sub-divide them? Read on to see why. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • Monitoring HP and Dell hardware in Gentoo.

    - by ewwhite
    I'm working in an environment that features a large number of Gentoo servers running on HP ProLiant and Dell PowerEdge equipment. While I've moved some of these systems to Red Hat or CentOS for consistency, I'm still left with a good number of systems that will remain Gentoo. One of the issues I see with the Gentoo arrangement is the lack of vendor-supported hardware monitoring. There doesn't seem to be an equivalent to the HP ProLiant Support Pack or Dell's agents for Gentoo. Is this simply something that you give up when using this distribution? How do you monitor hardware health and the like on Gentoo systems?
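    Absent vendor agents, one common fallback is to poll generic Linux tools (smartmontools, ipmitool, lm_sensors) from a cron job and alert on bad output. A minimal sketch, assuming smartctl and ipmitool are installed and the device name is just an example:
      import subprocess

      def check(cmd):
          """Return (ok, output) for a health-check command."""
          result = subprocess.run(cmd, capture_output=True, text=True)
          return result.returncode == 0, result.stdout

      # SMART health of a disk (device path is illustrative).
      ok, out = check(["smartctl", "-H", "/dev/sda"])
      if not ok or "PASSED" not in out:
          print("ALERT: disk health check failed")

      # IPMI sensor readings on server-class boards (ProLiant/PowerEdge BMCs).
      ok, out = check(["ipmitool", "sdr", "elist"])
      if not ok:
          print("ALERT: could not read IPMI sensors")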

    Read the article

  • Oracle Linux / Symantec Partnership

    - by Ted Davis
    Fred Astaire and Ginger Rogers sang the now famous lyric: “You like to-may-toes and I like to-mah-toes”. In the tech world, is it Semantic or is it Symantec? Ah, well, we know it’s the latter. Actually, who in the tech world doesn’t know or hasn’t heard of Symantec? Symantec is thoroughly ingrained in Enterprise customer infrastructure, from their Storage Foundation suite to their anti-virus products. It would be hard to find anyone who doesn’t use their software. Likewise, Oracle Linux is thoroughly ingrained in Enterprise infrastructure – so our paths cross quite a bit. This is why the Oracle Linux engineering team works with Symantec to make sure their applications and agents are supported on Oracle Linux. We also want to make sure the Oracle Linux / Symantec customer experience is trouble free so customer work continues at the same blistering pace. Here are a few Symantec applications that are supported on Oracle Linux:
    - Storage Foundation
    - NetBackup Enterprise Server
    - Symantec AntiVirus for Linux
    - Veritas Cluster Server
    - Backup Exec Agent for Linux
    So, while Fred and Ginger may disagree on how to spell tomato, for our software customers the Oracle / Symantec partnership works together so our joint customers experience and hear the sweet song of success.

    Read the article

  • SOA Suite HealthCare Integration Architecture

    - by Nitesh Jain
    Oracle SOA Suite for Healthcare Integration is an integrated, best-of-breed suite that helps healthcare organizations rapidly design, assemble, deploy and manage highly agile and adaptable business applications. It helps the healthcare industry reduce operating costs and speeds time-to-market by delivering a consistent user interface, management console and monitoring environment, as well as healthcare libraries and templates for healthcare customer projects. Oracle SOA Suite for Healthcare Integration is fully configurable and extensible, providing a highly flexible platform for collaboration across all healthcare domains. Healthcare message standards support:
    - Messaging standards: HL7, HIPAA, Custom, X12N
    - Exchange standards: MLLP (v1.0, v2.0), TCP/IP, File, FTP, SFTP, JMS
    Simplified dashboards and customized reports give users advanced monitoring capabilities that support end-to-end healthcare message tracking. A toolkit for rapid HIPAA 5010 upgrade and compliance provides pre-defined healthcare integration mappings for HIPAA standards that are fully customizable and extensible. MLLP-HA provides failover and disaster recovery, keeping the system running over the long term without issues. Audit keeps track of all system changes, and alerts and notifications (SMS, email, etc.) help users take fast action with real-time tracking.

    Read the article

  • Low coupling processing big quantities of data

    - by vitalik
    Usually I achieve low coupling by creating classes that exchange lists, sets, and maps between them. Now I am developing a batch application and I can't put all the data inside a data structure because there isn't enough memory. I have to read and process one chunk of data and then move on to the next one. So having low coupling is much more difficult because I have to check somewhere if there is still data to read, etc. What I am using now is: Source - Process - Persist. The classes that process have to ask the Source classes if there are more rows to read. What are the best practices and/or useful patterns in such situations? I hope I am explaining myself; if not, tell me.
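    One well-known way to keep the stages decoupled is to model the source as an iterator (or generator) so the processing stage simply consumes chunks until exhaustion, without ever asking "is there more?". A minimal Python sketch, with a made-up input file and a placeholder transformation:
      def read_chunks(path, chunk_size=1000):
          """Source: yield one chunk of rows at a time; never holds the whole file."""
          with open(path) as f:
              chunk = []
              for line in f:
                  chunk.append(line.rstrip("\n"))
                  if len(chunk) == chunk_size:
                      yield chunk
                      chunk = []
              if chunk:
                  yield chunk  # final partial chunk

      def process(chunks):
          """Process: transform each chunk; knows nothing about files or persistence."""
          for chunk in chunks:
              yield [row.upper() for row in chunk]  # placeholder transformation

      def persist(chunks):
          """Persist: drain the pipeline; storage details are hidden here."""
          for chunk in chunks:
              print(f"persisting {len(chunk)} rows")

      persist(process(read_chunks("data.txt")))  # 'data.txt' is a hypothetical input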

    Read the article

  • "Optimal" game loop for 2D side-scroller

    - by MrDatabase
    Is it possible to describe an "optimal" (in terms of performance) layout for a 2D side-scroller's game loop? In this context the "game loop" takes user input, updates the states of game objects and draws the game objects. For example having a GameObject base class with a deep inheritance hierarchy could be good for maintenance... you can do something like the following: foreach(GameObject g in gameObjects) g.update(); However I think this approach can create performance issues. On the other hand all game objects' data and functions could be global. Which would be a maintenance headache but might be closer to an optimally performing game loop. Any thoughts? I'm interested in practical applications of near optimal game loop structure... even if I get a maintenance headache in exchange for great performance.
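    To make the trade-off concrete, here is a hedged Python sketch of both layouts: the object-per-entity loop from the question next to a data-oriented variant that updates plain parallel arrays, which tends to be friendlier to the cache; neither is presented as the definitive answer:
      # Object-oriented layout: easy to maintain, one method call per object.
      class GameObject:
          def __init__(self, x, vx):
              self.x, self.vx = x, vx
          def update(self, dt):
              self.x += self.vx * dt

      objects = [GameObject(float(i), 1.0) for i in range(10000)]
      for g in objects:          # the foreach loop from the question
          g.update(1 / 60)

      # Data-oriented layout: the same update as one pass over contiguous arrays.
      xs = [float(i) for i in range(10000)]
      vxs = [1.0] * 10000
      dt = 1 / 60
      for i in range(len(xs)):
          xs[i] += vxs[i] * dt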

    Read the article

  • Amazon Kindle Fire User Agent String

    - by Gopinath
    Today I was searching for the Amazon Kindle Fire user agent string so that I can trick websites into thinking I’m browsing using a Kindle Fire. To my surprise I found the following two variants of user agents listed on blogs, but I’m not sure which one is right, or if Kindle Fire generates two types of user agent strings. The first one is given by the prominent blogger and WSJ tech columnist Amit Agarwal, and I vote for him as he is a highly reputed person:
    Mozilla/5.0 (Linux; U; Android 2.3.4; en-us; Kindle Fire Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
    The second variant is found on this website, and I’m not sure about the authority of the blogger:
    Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us; Silk/1.1.0-80) AppleWebKit/533.16 (KHTML, like Gecko) Version/5.0 Safari/533.16 Silk-Accelerated=true
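    Either way, detecting the device server-side usually keys on the "Kindle Fire" or "Silk" tokens rather than the full string. A small Python sketch using the two variants quoted above:
      import re

      KINDLE_FIRE = re.compile(r"Kindle Fire|Silk/", re.IGNORECASE)

      ua_browser = ("Mozilla/5.0 (Linux; U; Android 2.3.4; en-us; Kindle Fire "
                    "Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) "
                    "Version/4.0 Mobile Safari/533.1")
      ua_silk = ("Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us; "
                 "Silk/1.1.0-80) AppleWebKit/533.16 (KHTML, like Gecko) "
                 "Version/5.0 Safari/533.16 Silk-Accelerated=true")

      for ua in (ua_browser, ua_silk):
          print(bool(KINDLE_FIRE.search(ua)))  # True for both variants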

    Read the article

  • Robots.txt practices with .htaccess redirections (inherits)

    - by Jayhal
    I have a question regarding how to write robots.txt files for many domains and subdomains with redirects in place. We have a hosting account with a primary domain and add-on domains. All of our domains and subdomains, including the primary domain, are redirected via htaccess 301s to their own subdirectories in the primary domain's root directory. I'm confused about how I would write the robots.txt for certain directories. First, I wanted to confirm that I am right in understanding that, for domains and subdomains, crawlers will look to the directory that acts as that URL's root directory for the crawling rules (robots.txt). Also, that a directory will not be affected by a robots.txt present in its parent directory if the directory has its own domain/subdomain and that URL is the one being accessed by crawlers. (I'm pretty sure, but I wanted to confirm I didn't have a fundamentally flawed understanding of robots.txt.) In the original root directory on the account (where the primary domain was directed before htaccess was put in place), what should the robots.txt contain? When crawlers look to crawl our primary domain, will they look to the original root directory for the robots.txt, or will they reference the file contained in the new subdirectory where all the primary domain's site files are located? If so, what should the root's robots.txt include, if anything at all? Would I be right to include a simple 'Disallow: /' for all agents, and then include more specific robots.txt files in each subdirectory with more specific instructions? Would that affect the crawling of the directory where the primary domain is now redirected? Any help is greatly appreciated. Thanks!
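    One way to sanity-check this kind of setup: crawlers fetch /robots.txt per host (whatever the server returns for that URL after your internal rewrites), so you can test candidate rules with Python's stdlib parser before deploying them. A small sketch with made-up rules and a hypothetical domain:
      from urllib.robotparser import RobotFileParser

      # Hypothetical rules for the root of the primary domain.
      rules = [
          "User-agent: *",
          "Allow: /primary-site/",  # the subdirectory actually serving the site
          "Disallow: /",            # block everything else at the bare root
      ]

      rp = RobotFileParser()
      rp.parse(rules)
      print(rp.can_fetch("*", "http://example.com/"))               # False
      print(rp.can_fetch("*", "http://example.com/primary-site/"))  # True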

    Read the article

  • How to introduce versioning for questions on Stack*? [closed]

    - by András Szepesházi
    Today's best answer to any given question was not available yesterday and will be obsolete tomorrow, especially when we're talking about software development. Here is an example for you (there must be thousands; this one is entirely imaginary):
    Q: What is the best way to implement autocomplete in javascript?
    A: (2000) Whut?
    A: (2007) Write a custom ajax function, display the results after processing.
    A: (2011) Use this plugin: http://jqueryui.com/demos/autocomplete/ (nono, I'm not a jQuery affiliate, actually I prefer MooTools)
    What would be your recommendation for introducing versioning for Stack Exchange questions and answers? Is there a need for that at all?

    Read the article

  • Finding rows that intersect with a date period

    - by DavidWimbush
    This one is mainly a personal reminder but I hope it helps somebody else too. Let's say you have a table that covers something like currency exchange rates, with columns for the start and end dates of the period each rate was applicable. Now you need to list the rates that applied during the year 2009. For some reason this always fazes me and I have to work it out with a diagram. So here's the recipe so I never have to do that again:
    select *
    from   ExchangeRate
    where  StartDate <= '31-DEC-2009'
           and EndDate >= '01-JAN-2009'
    That is all!

    Read the article

  • Using Asset Groups

    - by Owen Allen
    I got a question about putting assets in groups: "I'm planning on installing some agents manually on existing systems, and I want to have them put in a specific asset group once they're discovered. I don't see any way to tell the install script to put the asset in a group. How can I add the assets to a group, either through the UI or the CLI?" There are a few ways. In the CLI, you can use groups mode, and use this command to add an asset to a group:
    attach -n|--gear <asset name> -g|--group <group>
    You can also use -U|--uuid <UUID> to specify the asset if you have multiple assets with the same name. In the UI, you have a couple of options. You can select an asset and click Add Asset to Group to add it to a group you select. Alternatively, if you're trying to make a group for assets with a specific characteristic, you can specify rules that will automatically add assets to a group based on that characteristic.
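    For example, once in groups mode, adding a manually installed agent's asset to an existing group might look like this (the asset and group names here are hypothetical):
    attach -n webserver1 -g production-servers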

    Read the article

  • SQL Server agent died

    - by Manjot
    Three SQL Server Agents stopped and didn't restart automatically until we manually restarted them. The SQL Servers are on SQL 2005 x86 Standard Edition. The error message reported in the Application log was: Faulting application sqlagent90.exe, version 2005.90.4211.0, stamp 49837478, faulting module kernel32.dll, version 5.2.3790.4480, stamp 49c51f0a, debug? 0, fault address 0x0000bef7. No errors were reported in the SQL Server logs or SQL Agent logs. Does anyone know what this means? I tried to Google but no luck. Thanks in advance!

    Read the article

  • Luxottica Delivers an Elevated Customer Experience

    - by user801960
    Luxottica Group is a global leader in premium, luxury and sports eyewear with nearly 6,250 stores worldwide. The Group’s strong brand portfolio comprises ten house brands, including Oakley, Ray-Ban, Persol and Arnette, and 20 licensed brands such as Bulgari, Chanel and Versace. In January at the Oracle Retail Exchange in New York, Luca Del Din, Luxottica Group’s IT Manager – Global Retail Demand and Integration, and Irven Cassio, Digital Experience Director for Luxottica Retail, introduced our REx delegates to their flagship Sunglass Hut store on Fifth Avenue. This store showcase provided the opportunity to explore this fantastic retail space, incorporating the store’s interactive retail concept, the Sunglass Hut Social Sun station. I invite you to hear from Luca and Irven as we explore some of the innovative technologies and concepts that Luxottica deployed in this store and how these deliver an elevated customer experience.

    Read the article

  • Chinese footwear retailer Daphne on choosing Oracle Retail solutions

    - by user801960
    In January, leading Chinese footwear brand Daphne announced that they had chosen Oracle Retail as a solution provider for its expansion strategy. We were delighted to host Michael Hu, Chief Operations Officer of Daphne, at our exclusive Retail Exchange event in New York in January. We interviewed Michael about Daphne, Daphne's plans for the future and how Oracle's solutions will play a role in that future. The video is below. We are very grateful to Michael for taking the time to speak to us. To find out more about how Daphne is using Oracle, see the full press release HERE. To find out more about Oracle Retail solutions and how Oracle Retail can help your business, visit http://www.oracle.com/oms/retail/index.html

    Read the article

  • Getting bank account information from a bank and displaying on a website [closed]

    - by Ali Syed
    Hello, I am looking for a way to get bank account information (transactions and balance) from a financial institution and display it on a website. The question is intentionally vague; everything is open. I haven't chosen a bank, server-side technology or front-end technology. The idea is to have a script run automatically and periodically (once or twice a day) and get the latest account information from the bank server. Probably something in the direction of OFX (Open Financial Exchange), HBCI (Home Banking Computer Interface), FinTS or something like it. Even working over a closed-source API is not out of the question: Wesabe or Mint or something. PayPal is not an option because it won't work in India or Pakistan. Cheers.
    Explanation: I have an exclusive small club. My members make irregular payments. These transactions should be online for all members (with login) to see.

    Read the article

  • Detect frameworks and/or CMS utilized on websites in Firefox

    - by jkneip
    I'm redesigning the website for my academic library and am examining other sites to identify the technologies they use. Things like: web frameworks, JavaScript frameworks, server-side technology, and the content management system. Now I've had some real success in Firefox using plugins like Wappalyzer, Firebug, and the DOM Inspector. But some sites just don't display any of the info I'm looking for using these tools, especially, it seems, when an enterprise-level CMS is being used. Does anyone know of any other tools to detect this kind of data? Also, with Firebug & the DOM Inspector there is a lot of info displayed, and I wondered if there was a way to infer the presence of server-side technologies, CMSs, etc. from certain elements of a web page? Also, if this question is more relevant to another Stack Exchange site, please let me know and I'll post it there instead. Much thanks, Jason

    Read the article

  • A Multi-Channel Contact Center Can Reduce Total Cost of Ownership

    - by Tom Floodeen
    In order to remain competitive in today’s market, CRM customers need to provide a feature-rich, superior call center experience to their customers across all communication channels while improving their service agent productivity. They also require their call center to be deeply integrated with their CRM system, and they need to implement all this quickly, seamlessly, and without breaking the bank. Oracle’s Siebel Customer Relationship Management (CRM) is the world’s leading application suite for automated customer-facing operations for Sales and Marketing and for managing all aspects of providing service to customers. Oracle’s Contact On Demand (COD) is a world-class, carrier-grade, hosted multi-channel contact center solution that can be deployed in days without up-front capital expenditures or integration costs. Agents can work efficiently from anywhere in the world with 360-degree views into customer interactions and real-time business intelligence. Customers gain from rapid and personalized sales and service, while organizations can dramatically reduce costs and increase revenues. Oracle’s latest update of Siebel CRM now comes pre-integrated with Oracle’s Contact On Demand. This solution seamlessly runs a fully-functional contact center provided by a single vendor, significantly reducing your total cost of ownership. This solution supports Siebel 7.8 and higher for Voice, and Siebel 8.1 and higher for Voice and Siebel CRM Chat. The impressive feature list of Oracle’s COD solution includes a full-control CTI toolbar with Voice, Chat, and Click-to-Dial features. It also includes context-sensitive screens, automated desktops, built-in IVR, multidimensional routing, supervisor and quality monitoring, and instant provisioning. The solution also ships with an Extensible Web Services interface for implementing more complex business processes. Click here to learn how to reduce the complexity and total cost of ownership of your contact center. Contact Ann Singh at [email protected] for additional information.

    Read the article

  • Free WebLogic Administration Cookbook

    - by Antony Reynolds
    Packt Publishing are offering free copies of Oracle WebLogic Server 12c Advanced Administration Cookbook (http://www.packtpub.com/oracle-weblogic-server-12c-advanced-administration-cookbook/book) in exchange for a review either on your blog or on the title’s Amazon page. Here’s the blurb:
    - Install, create and configure WebLogic Server
    - Configure an Administration Server with high availability
    - Create and configure JDBC data sources, multi data sources and gridlink data sources
    - Tune the multi data source to survive database failures
    - Setup JMS distributed queues
    - Use WLDF to send threshold notifications
    - Configure WebLogic Server for stability and resilience
    If you’re a datacenter operator, system administrator or even a Java developer, this book could be exactly what you are looking for to take you one step further with Oracle WebLogic Server. This is a good way to bag yourself a free cookbook (current retail price $25.49). Free review copies are available until Tuesday 2nd July 2013, so if you are interested, email Harleen Kaur Bagga at: harleenb-AT-packtpub.com. I will be posting my own review shortly!

    Read the article

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably javascript heavy, built with backbone.js, with most content being passed as JSON and loaded via backbone. I just needed some advice or opinions on the likelihood of my website being penalised for using the method of serving plain HTML (text, images, everything) to search engine bots and a js front-end version to normal users. This is my basic plan for my site: I plan on having the first request to any page be html which will only give about 1/4 of the page, and thereafter load the last 3/4 with backbone js. Therefore non-javascript users get a 'bit' of the experience. Once that new user has visited and is detected to have js, a cookie will be saved on their machine and requests thereafter will be AJAX only. Example: If (AJAX || HasJSCookie) { // Pass JSON }
    Search engine served content: that entire experience of loading via AJAX will be stripped if a google bot for example is detected, and the same content will be served but all html. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about inner links and picking up every bit of content I thought it would be better to give search engines the entire content. I plan to do this by just detecting against a list of user agents to know if it's a bot or not. Example: If (Bot) { // serve plain html }
    In addition I plan to make clean URLs for the entire website despite full AJAX, therefore providing AJAX content to www.example.com/#/page and normal html to www.example.com/page is kind of out of the question. I would rather avoid the practice of using # when technology such as HTML5 push state is around. So my question is really just asking the opinion of the masses on whether it's likely that my website will be penalised? And do you suggest an alternative which avoids the 'noscript' method?
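    As a sketch of the server-side branch described above (the user-agent tokens and cookie name are illustrative only; robust bot detection would also verify crawlers, e.g. by reverse DNS):
      BOT_TOKENS = ("googlebot", "bingbot", "slurp")  # illustrative bot tokens

      def wants_html(user_agent, cookies):
          """Serve full plain HTML to bots and to clients without the JS cookie."""
          ua = user_agent.lower()
          if any(token in ua for token in BOT_TOKENS):
              return True                  # search engine: full HTML
          return "has_js" not in cookies   # no JS cookie yet: HTML first load

      # Hypothetical requests:
      print(wants_html("Mozilla/5.0 (compatible; Googlebot/2.1)", {}))    # True
      print(wants_html("Mozilla/5.0 (Windows NT 6.1)", {"has_js": "1"}))  # False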

    Read the article

  • Crosstalk 2012

    - by David Dorf
    There are lots of industry conferences, but I consistently hear that Oracle Retail Crosstalk is one of the better ones, presumably because its focus is on helping Oracle Retail's customers interact, share insights, and exchange ideas. If you're an Oracle Retail customer, I strongly encourage you to register and attend. Here's why:
    - Two days of fantastic speakers from companies like Daphne, Kohl's, Morrisons, Abercrombie & Fitch, Hot Topic, Talbots, and Disney, to name a few
    - Held in the heart of Chicago, with store tours on Michigan Avenue
    - Special interest discussions on merchandising, supply chain, planning, stores, and technology
    - Golf, fireworks at Navy Pier, and dancing at Soldier Field
    And best of all, the conference is free for qualified customers. So I certainly hope to see you there!

    Read the article

  • System Center Service Manager 2010 SP1 Resource Links

    - by Mickey Gousset
    System Center Service Manager is a new product that Microsoft released last year to handle incident/problem/change management. Currently the latest version is System Center Service Manager 2010 SP1, and there is a Cumulative Update for SP1 that you should grab as well. A strong ecosystem is starting to spring up around this product, with tools and connectors that fill needs not built into the product. To find the latest list of these items, go to the SCSM 2010 Downloads page. Here you can find a list of the latest tools and add-ons from Microsoft, as well as third-party vendors. The Microsoft Exchange connector and the PowerShell cmdlets are definitely worth downloading and installing.

    Read the article

  • The first day of JavaOne is already over!

    - by delabassee
    In the past, Sunday used to be a more relaxing day with ‘just’ some JavaOne activities going on - a soft day to prepare yourself for an exhausting week. This is now over as JavaOne is expanding; Sunday is now an integral part of the conference. One side effect of this extra day is that some activities related to JavaOne and OpenWorld, such as MySQL Connect, are being pushed to start a day earlier, on Saturday (can you spot the pattern here?). On the GlassFish front, Sunday was a very busy day! It started at the Moscone Center with the annual GlassFish Community Event, where the Java EE 7 and GF 4 roadmaps were presented and discussed. During the event, different GlassFish users such as ZeroTurnaround (the JRebel guys), Grupo RBS and IDR Solutions shared their views on GF, why they like GF but also what could be improved. The event was also a forum for the GF community to exchange with some of the key Java EE / GlassFish Oracle executives and the different GF team members. The Strategy keynote and the Technical keynote were held in the Masonic Auditorium later in the afternoon. Oracle executives presented the plans for Java SE, JavaFX and Java EE. As on-demand replays will be available soon, I will not summarize several hours of content, but here are some personal takeaways from those keynotes.
    Modularity. Modularity is a big deal. We know by now that Project Jigsaw will not be ready for Java SE 8, but in any case it is already possible (and encouraged) to test Jigsaw today. In the future, Java EE plans to rely on the modularity features provided by Java SE, so Project Jigsaw is also relevant for Java EE developers. In the shorter term, to cover some of the modular requirements, Java SE will adopt the approach that was used for Java EE 6 and the notion of Profiles. This approach does not define a module system per se; Profiles is a way to clearly define different subsets of Java SE to fulfill different needs (e.g. the full JRE is not required for a headless application). The introduction of different Profiles, from the Base profile (10mb) to the Full profile (+50mb), has been proposed for Java SE 8.
    Embedded. Embedded is a strong theme going forward for the Java platform. There is now a dedicated program: Java Embedded @ JavaOne. Java by nature (e.g. platform independence, built-in security, the ability to easily talk to any back-end system, the large set of skills available on the market, etc.) is probably the most suited platform for the Internet of Things. You can quickly be up to speed and develop services and applications for that space just by using your current Java skills. All you need to start developing on ARM is a $35 Raspberry Pi ARM board ($25 if you are cheap and can live without an ethernet connection) and the recently released JDK for Linux/ARM. Obviously, GlassFish runs on Raspberry Pi. If you want to go further in the embedded space, you should take a look at Java SE Embedded, an optimized, low-footprint Java environment that supports the major embedded architectures (ARM, PPC and x86). Finally, Oracle has recently introduced Java Embedded Suite, a new solution that brings modern middleware capabilities to the embedded space. Java Embedded Suite is an optimized solution that leverages Java SE Embedded but also GlassFish, Jersey and JavaDB to deploy advanced value-added capabilities (e.g. sensor data filtering) deeper in the network, closer to the devices.
    JavaFX. JavaFX is going strong! Starting from Java SE 7u6, JavaFX is bundled with the JDK. JavaFX is now available for all the major desktop platforms (Windows, Linux and Mac OS X). JavaFX is now also available, in developer preview, for low-end devices running Linux/ARM; during the keynote, JavaFX was shown running on a Raspberry Pi! And as announced during the keynote, JavaFX should be fully open-sourced by the end of the year; contributions are welcome! There is a strong momentum around JavaFX; it’s the ideal client solution for the Java platform, a client layer that works perfectly with GlassFish on the back-end. If you were not convinced by JavaFX, it’s time to reconsider it! As an old Chinese proverb says, “One tweet is worth a thousand words!”
    HTML5, Project Avatar and Java EE 7. HTML5 got a lot of airtime too; it was covered during the Java EE 7 section of the keynote. Some details about Project Avatar, Oracle’s incubator project for a TSA (Thin Server Architecture) solution, were unveiled during the keynote. On the tooling side, Project Easel running on NetBeans 7.3 beta was demoed, including a cool NetBeans debugging session running in Chrome! HTML5, Project Avatar and Java EE 7 deserve separate posts...
    Feedback. We need your feedback! There are many projects, JSRs and products cooking: GlassFish 4, Project Jigsaw, Concurrency Utilities for Java EE (JSR 236), OpenJFX, OpenJDK, to name just a few. Those projects and specifications will have a profound impact on the Java platform for the years to come! So if you have the opportunity, download, install, learn, test them and give feedback! Remember, you can “Make the Future Java”!
    Finally, the traditional GlassFish party at the Thirsty Bear concluded the first JavaOne day. This party is another place where the community can freely exchange with the GlassFish team in a more relaxed, more friendly (but sometimes more noisy) atmosphere. Arun has posted a set of pictures to reflect the atmosphere of the keynotes and the GlassFish party. You can find more details on the other Java EE and GlassFish activities here.

    Read the article
