Search Results

Search found 127 results on 6 pages for 'destinations'.

Page 4/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
    Control Center Agent (CCA)
    The Control Center Agent is the OWB component that runs Template Mappings in the Oracle Containers for J2EE (OC4J) server; it is also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts, and it provides the fundamental infrastructure for the heterogeneous, Code Template-based mapping support and the Web services-related features of OWB in this release. In Oracle Warehouse Builder 11gR2 the Control Center Agent, by default, runs in the built-in OC4J that is bundled in the Oracle Home. Besides that, you also have the ability to install the Control Center Agent in an Oracle Application Server installation. In this article, you will find step-by-step instructions on how to install the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks:
    Task 1: Install and Configure the Application Server
    Task 2: Deploy the Control Center Agent to the Application Server
    Task 3: Optional Configuration Tasks

    Task 1: Install and Configure the Application Server
    Before configuring the Application Server, you need to install it from the Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed and run in the Application Server successfully. The essential configuration tasks are outlined below:
    · Modify the OC4J Startup Script
    · Set up Control Center Agent Server Side Logging
    · Set up Audit Table Data Source
    · Copy ct_permissions.properties File
    · Set up Security Roles for Control Center Agent
    · Create JMS Queues
    · Install JDBC Drivers to OC4J

    Modify the OC4J Startup Script
    The OC4J startup script "opmn.xml" is located in the Application Server configuration directory, $AS_HOME/opmn/conf, where $AS_HOME is the root home directory of the application server. Open the file opmn.xml in a text editor and alter its contents, making sure that:
    · MaxPermSize is set to 128M. This ensures that OC4J has enough PermGen space to run the Control Center Agent and prevents java.lang.OutOfMemoryError when running the agent.
    · python.path sets the path for the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where OWB is installed.
    · km_security_needed determines whether restrictions are applied to the kinds of operating system commands that OWB Code Template scripts executed by the Control Center Agent are allowed to run. Setting km_security_needed to "true" enforces these restrictions, while setting it to "false" removes them.

    Set up Control Center Agent Server Side Logging
    Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add the required handler to the log handlers section; the jrt-internal-log-handler is the handler used by the Control Center Agent runtime logger to create log files. Then add the corresponding entry to the loggers section to create the logger for Control Center Agent runtime auditing.
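    As a rough, hedged sketch of the kind of opmn.xml start-parameters entry described under "Modify the OC4J Startup Script" above (the exact element layout in your opmn.xml may differ, passing python.path and km_security_needed as -D system properties is an assumption based on the surrounding text, and the paths are illustrative):

      <process-type id="home" module-name="OC4J" status="enabled">
        <module-data>
          <category id="start-parameters">
            <data id="java-options"
                  value="-server -XX:MaxPermSize=128M
                         -Dpython.path=$OWB_HOME/owb/lib/int/jython_lib.zip:$OWB_HOME/owb/lib/int/jython_owblib.jar
                         -Dkm_security_needed=true"/>
          </category>
        </module-data>
      </process-type>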
    Set up Audit Table Data Source
    To enable Audit Table logging, a managed data source and connection pool need to be set up before Control Center Agent deployment. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor and define the audit data source shown below in the file:
      <managed-data-source name="AuditDS" connection-pool-name="OWBSYS Audit Connection Pool" jndi-name="jdbc/AuditDS"/>
      <connection-pool name="OWBSYS Audit Connection Pool">
        <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
          user="owbsys_audit" password="owbsys_audit"
          url="jdbc:oracle:thin:@//localhost:1521/ORCL"/>
      </connection-pool>

    Copy ct_permissions.properties File
    The ct_permissions.properties file can be obtained from the $OWB_HOME/owb/jrt/config/ directory. You need to copy the file to the $AS_HOME/j2ee/home/config directory. This properties file takes effect when the km-security setting is set to true in the Control Center Agent. By default, ALLOWED_CMD is commented out in the ct_permissions.properties file. This prevents all system commands from being invoked from scripts executed in the Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, ALLOWED_CMD needs to be uncommented, and the system commands allowed to be invoked need to be added to ALLOWED_CMD.

    Set up Security Roles for Control Center Agent
    You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).
    1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security. Click the task icon next to Security Providers.
    2. On the Security Providers page, click the "Instance Level Security" button. On the Instance Level Security page, go to the "Realms" tab. You will see a row for the default realm "jazn.com" in the results table. It has a "Roles" column and a "Users" column. Click on the number in the "Roles" column. The "Roles" page displays all the roles available for the realm. Click the "Create" button to create a new role, "OWB_J2EE_EXECUTOR".
    3. On the Add Role screen, enter the Name OWB_J2EE_EXECUTOR, and click OK.
    4. Follow the same steps as before, and create a new role "OWB_J2EE_OPERATOR".
    5. Assign the roles "oc4j-administrators" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_OPERATOR" by moving these roles from "Available Roles", and click "OK" to save.
    6. Go back to the Instance Level Security page and create a new role "OWB_J2EE_ADMINISTRATOR".
    7. Assign the roles "OWB_J2EE_OPERATOR" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_ADMINISTRATOR" by moving these roles from "Available Roles", and click "OK" to save.
    8. Go back to the Instance Level Security page. This time, click on the number in the "Users" column for the realm "jazn.com". The "Users" page shows all the users defined for this realm. Locate the user "oc4jadmin" in the results table and click on it.
    9. Assign the roles "OWB_J2EE_ADMINISTRATOR" and "oc4j-app-administrators" to this user by moving the roles from the "Available Roles" selection box to the "Selected Roles" box, and click "Apply" to save.
    10. Go back to the Instance Level Security page and create a new role "OWB_INTERNAL_USERS", assigning no user or role to this role.
    Simply click "OK" to create this role. Now you have finished creating the security roles required for the Control Center Agent.

    Create JMS Queues
    You need to create two JMS queues for the Control Center Agent: owbQueue and abort_owbQueue.
    1. Go to the OC4J home page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations.
    2. On the JMS Destinations page, click the "Create New" button to create a new JMS queue. On the Add Destination page, choose "Queue" as the Destination Type. Enter "owbQueue" as the Destination Name. Select "In Memory Persistence Only" as the Persistence Type, enter "jms/owbQueue" as the JNDI Location, and click "OK" to finish.
    3. Follow the same instructions as above to create the abort_owbQueue. Now you have finished creating the JMS queues required for the Control Center Agent.

    Install JDBC Drivers to OC4J
    In order to execute Code Templates using commercial databases other than Oracle, e.g. DB2, SQL Server, etc., the corresponding JDBC driver files need to be added to the $AS_HOME/j2ee/home/applib directory.
    1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC driver .jar files can be found in the directory $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For the Sunopsis JDBC drivers, the file needed is snpsxmlo.jar.
    2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib.
    Now you have finished the Application Server configuration. For the configuration to take effect, you need to restart the Application Server.

    Task 2: Deploy the Control Center Agent to the Application Server
    Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).
    1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab. Click Deploy to begin deploying the Control Center Agent.
    2. On the Deploy: Select Archive screen, under Archive, select "Archive is present on local host. Upload the archive to the server where Application Server Control is running." Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select "Automatically create a new deployment plan". Click Next.
    3. Wait for the ear file to be uploaded to the Application Server. On the Deploy: Application Attributes screen, enter the Application Name jrt and the Context Root jrt. Leave the other attributes at their default values. Click Next.
    4. On the Deploy: Deployment Settings screen, leave all attributes at their default values and click Deploy. This takes about a minute, and when the application is deployed successfully a confirmation message is displayed. The Control Center Agent is now started automatically. Go back to the OC4J home page and click the Applications tab to make sure the deployed application jrt shows in the applications list.

    Task 3: Optional Configuration Tasks
    The optional configuration tasks are:
    · Secure Control Center Agent Web Service
    · Setting the PATH Environment Variable

    Secure Control Center Agent Web Service
    If you want to use JRTWebService with a secure website, you need to do the following steps:
    1. Create a file "secure-web-site.xml" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. In it, change the "protocol" to "https" and "secure" to "true", choose a port as the secure HTTP port, and add the "ssl-config" entry. Remember to use the absolute path for the key store file.
    2. Modify the file "server.xml" located in the $AS_HOME/j2ee/home/config directory, and add the <web-site> element for the secure-web-site.
    3. Create a key store file "serverkeystore.jks" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory.
    After the three files are altered, restart the application server. Now you can access JRTWebService over SSL at https://hostname:4443/jrt/webservice.

    Setting the PATH Environment Variable
    Sometimes system commands such as the Linux ls, sh, etc. cannot be executed successfully during script execution because they are not found in the PATH. To ensure they work normally, you can set the PATH environment variable. Navigate to the Enterprise Manager homepage.
    1. Go to the OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties.
    2. On the Server Properties screen, scroll down to the Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH as the Name, and fill in the Value with the directories that contain the system commands. Click Apply.

    After you work through this article, I believe you will have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge to other installation plans, such as installing the Control Center Agent on a standalone OC4J.
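    Referring back to step 1 under "Secure Control Center Agent Web Service": the following is only a hedged sketch of what such an OC4J web-site descriptor typically looks like (compare it against the file shipped in $OWB_HOME/owb/jrt/config; the port, application name and keystore password here are placeholders):

      <web-site port="4443" protocol="https" secure="true" display-name="OC4J Secure Web Site">
        <web-app application="jrt" name="jrt" root="/jrt"/>
        <ssl-config keystore="/absolute/path/to/serverkeystore.jks" keystore-password="welcome1"/>
      </web-site>

    The matching entry in server.xml (step 2) would then reference this descriptor, for example <web-site path="./secure-web-site.xml"/>.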

    Read the article

  • Overwrite msg in mirth

    - by Ryan H
    I have two destinations now, and the first calls a SOAP web service. I want to take the response of that destination and convert it to a mutable XML object:
      msg = new XML(responseMap.get('Destination1').getMessage());
    Doing logger.error(msg); shows the valid msg as I want it:
      <S:Body><PRPA_IN201306UV02> ... </PRPA_IN201306UV02></S:Body>
    but when I do msg['S:Body'] it returns nothing. Any suggestions would help.
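    One likely explanation, offered here as a hedged sketch rather than a verified fix: in E4X the S: prefix denotes a namespace, so string-style lookups such as msg['S:Body'] come back empty, and the elements have to be addressed through a Namespace object. In a Mirth JavaScript transformer that could look like this (the namespace URI below is the standard SOAP 1.1 envelope URI; adjust it if the service uses SOAP 1.2):

      // Parse the web-service response from the first destination.
      var msg = new XML(responseMap.get('Destination1').getMessage());
      var soapNs = new Namespace('http://schemas.xmlsoap.org/soap/envelope/');
      // If msg is the full envelope, descend into the namespaced Body element;
      // if msg itself is already <S:Body>, use it directly.
      var body = (msg.localName() == 'Body') ? msg : msg.soapNs::Body;
      // Grab the payload element regardless of which namespace it is declared in.
      var payload = body.*::PRPA_IN201306UV02;
      logger.error(payload.toXMLString());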

    Read the article

  • SQL group and order

    - by John Lambert
    I have multiple users with multiple entries recording times they arrive at destinations. Somehow, with my select query, I would like to show only the most recent entry for each unique user name. Here is the code that doesn't work:
      SELECT * FROM $dbTable GROUP BY xNAME ORDER BY xDATETIME DESC
    This does the name grouping fine, but as far as showing ONLY their most recent entry, it just shows the first entry it sees in the SQL table. I guess my question is, is this possible? Here is my data sample:
      john 7:00
      chris 7:30
      greg 8:00
      john 8:15
      greg 8:30
      chris 9:00
    and my desired result should only be:
      john 8:15
      chris 9:00
      greg 8:30
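    One commonly suggested pattern for this kind of "latest row per group" query is to join the table against a per-name MAX() subquery. A sketch, using the table placeholder and column names from the question:

      SELECT t.*
      FROM $dbTable t
      INNER JOIN (
          SELECT xNAME, MAX(xDATETIME) AS latest
          FROM $dbTable
          GROUP BY xNAME
      ) m ON m.xNAME = t.xNAME AND m.latest = t.xDATETIME
      ORDER BY t.xNAME;

    A plain GROUP BY cannot do this reliably, because the non-grouped columns are not guaranteed to come from the row with the newest xDATETIME.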

    Read the article

  • Convert C# event handling to VB.NET

    - by Quandary
    Question: The following C# code:
      public event RemoteAPI.NotifyCallback Notify
      {
          add { s_notify = value; }
          remove { Console.WriteLine("TODO : Notify remove."); }
      }
    How do I convert this to VB.NET? It implements an interface:
      Public Interface ICallsToServer
          ''' <summary>
          ''' Function to call the server from the client
          ''' </summary>
          ''' <param name="n">Some number</param>
          ''' <returns>Some interesting text</returns>
          Function SomeSimpleFunction(ByVal n As Integer) As String
          ''' <summary>
          ''' Add or remove callback destinations on the client
          ''' </summary>
          Event Notify As NotifyCallback
      End Interface
    The VB.NET automatically generated code was:
      Public Event Notify(ByVal s As String) Implements RemoteAPI.ICallsToServer.Notify
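    For reference, VB.NET can express explicit add/remove accessors with a Custom Event. The following is a hedged sketch rather than a verified drop-in: it assumes NotifyCallback is a Sub delegate taking a single String (as the auto-generated Event signature above suggests) and that s_notify is a field of that delegate type.

      Public Custom Event Notify As RemoteAPI.NotifyCallback Implements RemoteAPI.ICallsToServer.Notify
          AddHandler(ByVal value As RemoteAPI.NotifyCallback)
              s_notify = value
          End AddHandler
          RemoveHandler(ByVal value As RemoteAPI.NotifyCallback)
              Console.WriteLine("TODO : Notify remove.")
          End RemoveHandler
          RaiseEvent(ByVal s As String)
              ' Forward the raise to whatever callback was stored, if any.
              If s_notify IsNot Nothing Then s_notify.Invoke(s)
          End RaiseEvent
      End Event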

    Read the article

  • Locking down multiple sites in Sitecore

    - by adam
    Hi, I have two sites running under one Sitecore 6 installation. The home nodes of the sites are as such: /sitecore/content/Home and /sitecore/content/Careers. Assuming the primary site is at domain.com, the careers site can be accessed at careers.domain.com. My problem is that, by prefixing the URI with /sitecore/content/, any Sitecore item can be accessed from either (sub)domain. For example, I can get to: http://domain.com/sitecore/content/careers.aspx (should be under careers.domain.com) and http://careers.domain.com/sitecore/content/home/destinations.aspx (should be under domain.com). I know I can redirect these URLs (using IIS7 redirects or ISAPI_Rewrite) but is there any way to 'lock' Sitecore down to only serve items under the configured home node for that domain? Thanks, Adam

    Read the article

  • Can you change/redirect a django form's function by passing in your own function?

    - by Derek
    I'm dealing with django-paypal and want to change the button src images. So I went to the conf.py file in the source and edited the src destination. However, I really want to leave the source alone, and I noticed that the class PayPalPaymentsForm(forms.Form) has:
      def get_image(self):
          return {
              (True, self.SUBSCRIBE): SUBSCRIPTION_SANDBOX_IMAGE,
              (True, self.BUY): SANDBOX_IMAGE,
              (True, self.DONATE): DONATION_SANDBOX_IMAGE,
              (False, self.SUBSCRIBE): SUBSCRIPTION_IMAGE,
              (False, self.BUY): IMAGE,
              (False, self.DONATE): DONATION_IMAGE,
          }[TEST, self.button_type]
    which handles all the image src destinations. Since changing this def in the source is worse than changing conf, I was wondering if there is a way to pass in customized defs, the way you pass initial arguments to forms? That way no source code is changed, and I can customize the get_image def as much as I need. Passing in a def, something like this?
      def get_image(self):
          ....
          ....
      paypal = {
          'amount': 10,
          'item_name': 'test1',
          'item_number': 'test1_slug',
          # PayPal wants a unique invoice ID
          'invoice': str(uuid.uuid4()),
      }
      form = PayPalPaymentsForm(initial=paypal, get_image)
    Thanks!
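    Rather than passing a function into the constructor, one approach often used for this kind of thing is to subclass the form and override get_image(), which leaves the package source untouched. A hedged sketch (the import path matches typical django-paypal layouts but may differ by version, and the image paths are placeholders):

      from paypal.standard.forms import PayPalPaymentsForm

      class MyPayPalPaymentsForm(PayPalPaymentsForm):
          def get_image(self):
              # Use our own images for the button types we care about and fall
              # back to the stock sandbox/live lookup for everything else.
              custom = {
                  self.BUY: '/static/img/my-buy-button.png',
                  self.DONATE: '/static/img/my-donate-button.png',
              }
              return custom.get(self.button_type,
                                super(MyPayPalPaymentsForm, self).get_image())

      form = MyPayPalPaymentsForm(initial=paypal)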

    Read the article

  • vb.net add text to form without interaction

    - by user228058
    I have a WinForms project which lists all the files in a specified folder. It allows the user to select a new destination for each file, and when the user has chosen the destinations for all files that he would like to be moved, it moves the files one by one. My next step is that I need to display a confirm form while the files are being moved, and add each file's name and destination to the confirm form as it is moved. My question is: how can I add more text to the confirm form's controls after I have already loaded it (using confirm.ShowDialog() from my other form), without any user interaction? I imagine that I need to do it from the original form, because it needs to display each one when it starts to move that file, but I'm open to any suggestions :) TIA
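    One possible approach (a sketch only, with placeholder control and collection names): expose a public method on the confirm form and show it modeless with Show() rather than ShowDialog(), since ShowDialog() blocks the calling code until the form closes and the move loop could never run.

      ' On the confirm form: a public method the main form can call per file.
      Public Class ConfirmForm
          Public Sub AppendStatus(ByVal fileName As String, ByVal destination As String)
              lstStatus.Items.Add(fileName & "  ->  " & destination)
          End Sub
      End Class

      ' On the main form, while moving the files:
      Dim confirm As New ConfirmForm()
      confirm.Show(Me)                     ' modeless, so this code keeps running
      For Each item In filesToMove         ' filesToMove is a placeholder collection
          IO.File.Move(item.Source, item.Destination)
          confirm.AppendStatus(item.Source, item.Destination)
          Application.DoEvents()           ' crude, but lets the form repaint inside the loop
      Next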

    Read the article

  • Django template tag basic question

    - by ninja123
    It looks like this template tag works like a charm for most people: http://blog.localkinegrinds.com/2007/09/06/digg-style-pagination-in-django/ For some reason I get this error: Caught an exception while rendering: 'is_paginated'. I use the template tag in my template like so:
      {% load digg_paginator %}
      {% digg_paginator %}
    where digg_paginator.py is in my app/templatetags folder and the included template, digg_paginator.html, is in my app/templates folder. The queryset that needs pagination is called 'destinations'. If I just specify {% digg_paginator %}, how does it know what variable to paginate? I feel I am missing something important here, or I'm just plain stupid :P Someone please help, or explain to me how this should be done. Thanks
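    A hedged guess at what is going on: that tag reads pagination variables (is_paginated, paginator, page_obj) from the template context rather than paginating a queryset itself, which is why it blows up when none of them exist. Django's object_list generic view supplies them automatically; a hand-written view would look roughly like this (Destination is a placeholder model, and the page size and template name are assumptions):

      from django.core.paginator import Paginator, InvalidPage
      from django.shortcuts import render_to_response

      def destination_list(request):
          destinations = Destination.objects.all()   # Destination is a placeholder model
          paginator = Paginator(destinations, 20)
          try:
              page = paginator.page(int(request.GET.get('page', 1)))
          except (ValueError, InvalidPage):
              page = paginator.page(1)
          return render_to_response('destinations.html', {
              'destinations': page.object_list,
              'paginator': paginator,
              'page_obj': page,
              'is_paginated': page.has_other_pages(),
          })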

    Read the article

  • How can my application submit salesforce case with custom fields?

    - by Aaron
    I have an application that submits bugs for multiple customers to multiple destinations. I would like this application to include Salesforce as a destination. I have a web reference to the Enterprise WSDL and I am able to create cases for Salesforce. This works great until I want to include user-defined custom fields as part of the case. The custom fields are strongly typed with a __c suffix for my instance of Salesforce. But I will not have the liberty of updating my web service with each customer's instance of Salesforce. How can I submit a Salesforce case with custom fields that are not defined in the WSDL? Should I not be using the Enterprise WSDL?
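    One direction worth considering, sketched here with the usual caveats (member names follow the Partner API's generated .NET proxy, and the field and variable names are placeholders): the Partner WSDL is loosely typed, so a Case is just a generic sObject whose fields are plain XML elements built at runtime, which lets each customer's __c fields come from configuration instead of being baked into the WSDL.

      // Build field elements at runtime, including a customer-specific custom field.
      var doc = new System.Xml.XmlDocument();
      System.Xml.XmlElement subject = doc.CreateElement("Subject");
      subject.InnerText = "Bug report";
      System.Xml.XmlElement custom = doc.CreateElement("Customer_Field__c");
      custom.InnerText = "some value";

      var caseRecord = new sObject();          // partner-WSDL generic sObject (namespace depends on your web reference)
      caseRecord.type = "Case";
      caseRecord.Any = new[] { subject, custom };

      SaveResult[] results = binding.create(new sObject[] { caseRecord });   // binding = the SforceService proxy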

    Read the article

  • Can not make a request to google map

    - by Eme Emertana
    Hi, I am making a RESTful request to the Google Maps API, but I run into the following error:
      java.io.IOException: Server returned HTTP response code: 400 for URL: http://maps.googleapis.com/maps/api/distancematrix/xml?origins=Washington, DC USA&destinations=Los+Angeles+CA+USA&mode=driving&sensor=false
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
        at java.net.URLConnection.getContent(URLConnection.java:688)
    I believe it's making a correct connection, as I can get the correct response by copying the above URL into my browser. I am wondering why I get a 400 error code in my console and don't get the correct response when Java sends the request.
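    A plausible cause, hedged but consistent with the "works in the browser, 400 from code" symptom: the browser silently percent-encodes the spaces and comma in "Washington, DC USA", while URLConnection sends the URL exactly as written and the service rejects the raw spaces. A sketch of encoding the parameters before building the URL:

      import java.net.URL;
      import java.net.URLConnection;
      import java.net.URLEncoder;

      public class DistanceMatrixRequest {
          public static void main(String[] args) throws Exception {
              String origins = URLEncoder.encode("Washington, DC USA", "UTF-8");
              String destinations = URLEncoder.encode("Los Angeles CA USA", "UTF-8");
              String url = "http://maps.googleapis.com/maps/api/distancematrix/xml"
                      + "?origins=" + origins
                      + "&destinations=" + destinations
                      + "&mode=driving&sensor=false";
              URLConnection conn = new URL(url).openConnection();
              // Dump the XML response to stdout.
              System.out.println(new java.util.Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A").next());
          }
      }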

    Read the article

  • Top 5 Sites and Activities in San Francisco to Experience During Oracle OpenWorld

    - by kgee
    While Oracle OpenWorld may provide solutions and information on topics like how to simplify your IT, the importance of cloud, and what types of storage may satisfy your enterprise needs, who is going to tell you more about San Francisco? Here are some suggested sites and activities to experience after OpenWorld that aren't too far from the Moscone Center. It is recommended to take a cab for the sake of time, but the 6 square miles that make up San Francisco will make for a quick trek to any of the following destinations:
    The Golden Gate Bridge
    An image often associated with San Francisco, this bridge is one of the most impressive in the world. Take a walk across it, or view it from nearby Crissy Field; it is a sight that floors even the most veteran of San Franciscans.
    The Ferry Building
    Located at the end of Market Street in the Embarcadero, the Ferry Building once served as a hub of water transport and trade. The building has a bay-front view and an array of food choices and restaurants. It is easily accessible via the Muni, BART, trolley or by cab. It is a must-see in San Francisco, and not too far from the Moscone Center.
    Ride the Trolley to the Castro
    For only $2, you can go back in history for a moment on the trolley. Take the F-line from the Embarcadero and ride it all the way to the Castro district. During the ride, you will get an overview of the landscape and cultures that are prevalent in San Francisco, but be wary that some areas may beg for an open mind more than others.
    Golden Gate Park
    When you tire of the concrete jungle, the lucky part of being in San Francisco is that you can escape to a natural refuge, this park being one of the favorites. This park is known for its hiking trails, cultural attractions, monuments, lakes and gardens. It is one good reason to bring your sneakers to San Francisco, and is also a great place to picnic. Please be wary that it is easy to get lost, and it is advisable to bring a map (just in case) if you go.
    Haight Ashbury
    For a complete change of scenery, Haight Ashbury is known as one of the places hippies used to live and the location of "The Summer of Love." It is now a more affluent neighborhood with boutique shops and the occasional drum circle. While it may be perceived as grungy in certain spots, it is one of the most photographed places in San Francisco and an integral part of San Franciscan history.

    Read the article

  • View the Real Links Behind Shortened URLs in Chrome

    - by Asian Angel
    When you encounter shortened URLs there is always that worry in the back of your mind about where they really lead to. Now you can get a "sneak peek" at the real links behind those URLs with the View Thru extension for Google Chrome. The URL shortening services officially supported at this time are: bit.ly, cli.gs, ff.im, goo.gl, is.gd, nyti.ms, ow.ly, post.ly, su.pr, & tinyurl.com.
    Before
    When you encounter a shortened URL you are pretty much on your own in deciding whether to trust that link or not. It would really be nice if you could just hover your mouse over those links and know where they will lead ahead of time.
    After
    Once you have the extension installed you are ready to access that link-viewing goodness. Please note that you will need to reload any pages that were open prior to installing the extension. For our first example we chose a shortened URL from "bit.ly". As you can see, the entire link behind the shortened URL is displayed very nicely... no hidden surprises there! Note: there are no options to worry with for the extension. Another perfect result for the "goo.gl URL" shown below. View Thru will certainly remove a lot of the stress related to clicking on shortened URLs.
    Bonus Find
    Just out of curiosity we looked for a shortened URL not listed as being officially supported at this time. We found one with the "http://nyti.ms/" domain and View Thru showed the link perfectly... so be sure to give it a try on other services too.
    Conclusion
    If you worry about where a shortened URL will really lead you, then the View Thru extension can help alleviate that stress.
    Links
    Download the View Thru extension (Google Chrome Extensions)

    Read the article

  • ArchBeat Link-o-Rama for 2012-05-31

    - by Bob Rhubart
    Eclipse DemoCamp - June 2012 - Redwood Shores, CA (wiki.eclipse.org)
    Oracle HQ, 10 Twin Dolphin Dr., Redwood Shores, CA. Presentations: The evolution of Java persistence, Doug Clarke, EclipseLink Project Lead, Oracle; Eclipse Project Sapphire, Konstantin Komissarchik, Sapphire Project Lead, Oracle; Developing Rich ADF Applications with Java EE, Greg Stachnick, Oracle; Leveraging OSGi In The Enterprise, Kamal Muralidharan, Lead Engineer, eBay; NVIDIA Nsight Eclipse Edition, Goodwin (Tech lead - Visual tools), Eugene Ostroukhov (Senior engineer – Visual tools)
    BI Architecture Master Class for Partners - Oracle Architecture Unplugged (blogs.oracle.com)
    June 21, 2012. This workshop will be highly interactive and is aimed at Oracle OPN member partners who are IT Architects and BI+W specialists. It does not involve slide presentations or product feature details; it addresses IT-architectural issues and considerations for the IT-architect community.
    2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in SF (www.oracle.com)
    Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. The deadline for submissions is July 17, 2012.
    IT professionals: Very much the time to change our approach | Andy Mulholland (www.capgemini.com)
    This final post by retiring Capgemini CTO blogger Andy Mulholland is a must-read for anyone in IT.
    10 Great WebCenter Sites Resources (FatWire) | John Brunswick (www.johnbrunswick.com)
    John Brunswick shares "some good resources that span the WebCenter Sites and FatWire brands, to get a consolidated list of helpful destinations for ongoing education."
    Cloning a WebCenter Portal Managed Server | Maiko Rocha (blogs.oracle.com)
    WebCenter and ADF A-Team blogger Maiko Rocha shows how to easily add a new managed server to a single-node domain to make it a cluster.
    Sorting and Filtering By Model-Based LOV Display Value | Steven Davelaar (blogs.oracle.com)
    How-to by WebCenter and ADF A-Team blogger Steven Davelaar.
    Designing and Developing Cross-Cutting Features | Stephen Rylander (www.infoq.com)
    Architects are often tasked with a business feature that must span systems. This article by Stephen Rylander provides strategies to handle the change and guide your thinking about separating system boundaries and what that means for your technical design.
    Thought for the Day
    "A committee is a group of people who individually can do nothing, but who, as a group, can meet and decide that nothing can be done." — Fred Allen (5/31/1894 – 3/17/1956)
    Source: Brainy Quote

    Read the article

  • Test your internet connection - Emtel Mobile Internet

    After yesterday's report on Emtel Fixed Broadband (I'm still wondering where the 'fixed' part is), I did the same tests on Emtel Mobile Internet. For this I'm using the Huawei E169G HSDPA USB stick, connected to the same machine. This is actually my fail-safe internet connection, and the system automatically switches between the two lines if a problem (let's say a timeout) has been detected on the main line. For better comparison I used exactly the same servers on Speedtest.net.
    The results
    Following are the results for Rose Hill (hosted by Emtel) and Frankfurt, Germany (hosted by Vodafone DE), respectively:
    Speedtest.net result of 31.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Mobile Internet)
    Speedtest.net result of 31.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Mobile Internet)
    As you might easily see, there is a big difference in speed between national and international connections. More interesting are the results related to the download and upload ratio. I'm not sure whether connections over Emtel Mobile Internet are asymmetric or symmetric like the Fixed Broadband. It might be interesting to find out. The first test result actually might give us a clue that the connection could be asymmetric with a ratio of 3:1, but again I'm not sure. I'll find out and post an update on this.
    It depends on network coverage
    Later today I was on tour with my tablet, a Samsung Galaxy Tab 10.1 (model GT-P7500) running Android 4.0.4 (Ice Cream Sandwich), and did some more tests using the Speedtest.net app. The results are actually as expected, and in areas with better network coverage you will get better results after all. At least, as long as you stay inside the national networks. For anything abroad, it doesn't really matter. But see for yourselves:
    Speedtest.net result of 31.05.2013 between Cascavelle and servers in Rose Hill, Mauritius (Emtel - Mobile Internet), Port Louis, Mauritius and Kuala Lumpur, Malaysia
    It's rather shocking and frustrating to see how the speed on international destinations goes down. And the full capability of the tablet's integrated modem (HSDPA: 21 Mbps; HSUPA: 5.76 Mbps) isn't used either. I guess this demands more tests in other areas of the island, like Ebene, Pailles or Port Louis. I'll keep you updated...
    The question remains: alternatives?
    After the publication of the test results on Fixed Broadband I had some exchange with others on Facebook. Sadly, it seems that there are really no alternatives to what Emtel is offering at the moment. There are the various internet packages by Mauritius Telecom feat. Orange, like ADSL, MyT and Mobile Internet, and there is Bharat Telecom with their Bees offer which is currently limited to Ebene and parts of Quatre Bornes.

    Read the article

  • Retail in New York - a walk down 5th Avenue

    - by sarah.taylor(at)oracle.com
    It's the week of the NRF Big Show and all eyes in the retail industry are on New York. The Big Apple is famous for big retail, with a proliferation of incredibly iconic stores. The environment is exciting and familiar even to people visiting this small island for the first time. Most of us have travelled down Fifth Avenue watching movies and TV even if we have never set foot on American soil. I find it one of the most exciting retail cities in the world, and I am thrilled this year to be here with so many of Oracle's international retail customers who are joining us for the Retail Exchange. The Oracle program brings retailers from all over the planet together to share ideas and be inspired by New York retail and the NRF event. The show celebrates its 100th year in 2011, and New York itself has been recognized globally as the capital of innovative retail for just as long. Fifth Avenue is where many global brands have placed their flagship stores, and businesses are in constant competition to set themselves apart from their competitors - both in the store and from the street. These flagship retail destinations present what today's customers are finding most exciting and delightful about retail. For the tourist market, they may only visit these stores once, but the impression that a trip to a flagship store leaves with a customer can last a lifetime. One of the stores that is currently turning heads on Fifth Avenue is Hollister, sister brand to Abercrombie and Fitch, which has filled its shop front with a massive live video (and audio) feed of surfers on the beach in California. To complete the effect, they also have troughs of water in front of the video screens to bring the sea to the street. And this isn't the only kind of surfing that retailers are considering today; multi-channel retail is a hot topic that all of the retailers joining the Retail Exchange are considering. The rest of the world looks to the brands along Fifth Avenue for inspiration - how they take advantage of new opportunities, how they set themselves apart from their competitors and how they keep their products fresh and desirable. With these inspiring pioneers in New York, it's little wonder that NRF's Big Show is so popular, and that New York is viewed as one of the retail capitals of the world. It is a pleasure to be here with so many of the world's greatest international retailers.

    Read the article

  • How do I make my internal dns forward requests to a given server

    - by ankimal
    We have a DNS server internally that looks up IP addresses for all internal hosts and connects to the root DNS servers for all other domains (the rest of the internet). Here is my config:
      options {
          listen-on port 53 { 127.0.0.1; any; };
          listen-on-v6 port 53 { ::1; };
          directory "/var/named";
          dump-file "/var/named/data/cache_dump.db";
          statistics-file "/var/named/data/named_stats.txt";
          memstatistics-file "/var/named/data/named_mem_stats.txt";
          allow-query { 192.168.1.0/24; 127.0.0.1; };
          recursion yes;
      };
      logging {
          channel default_debug {
              file "data/named.run";
              severity dynamic;
          };
      };
      view "internal" { // What the home network will see
          match-clients { 127.0.0.1; any; };
          match-destinations { 127.0.0.1; any; };
          recursion yes;
          zone "." IN {
              type hint;
              file "named.ca";
          };
          include "internal_zones.conf";
      };
    We need to tweak this so that it goes to our ISP's DNS, x.y.z.w, instead of the root DNS servers if the host cannot be resolved internally. Config: Fedora 10 / BIND 9.5.2
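    One commonly suggested way to do this (sketch only; x.y.z.w stays the placeholder from the question): add a forwarders list, either in the global options block or inside the "internal" view, so anything the server is not authoritative for is sent to the ISP resolver instead of being iterated from the root hints.

      view "internal" {
          match-clients { 127.0.0.1; any; };
          match-destinations { 127.0.0.1; any; };
          recursion yes;

          forwarders { x.y.z.w; };
          forward only;     // use "forward first" instead to fall back to the root
                            // servers if the ISP resolver stops answering

          // with "forward only" the root hint zone is no longer strictly needed
          include "internal_zones.conf";
      };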

    Read the article

  • RDP problem with Vista and Windows 7 destination

    - by MadBison
    I use a server at home to host a bunch of concurrently running Hyper-V VMs with different OSes and software for testing. I have Vista on the laptop, all latest SPs and patches. The server is Server 2008 R2, fully patched. The guests are a mix of XP, Vista, Server 2008 and Windows 7. If I connect to the Win XP or Server 2008 guests using RDP, it is always good. Very quick, no speed issues. If I connect to the Vista or Win 7 guests, the response time is so slow it is unusable. Usually 6 or 8 seconds, and at times it is too long to measure! This happens from both the laptop running Vista and the server running Server 2008 R2. Does anyone know what the issue is with RDP to Vista and Windows 7 destinations? I did read this: http://blog.tmcnet.com/blog/tom-keating/microsoft/remote-desktop-slow-problem-solved.asp and that is not the problem; I have applied that change to all PCs.

    Read the article

  • SquidGuard and Active Directory: how to deal with multiple groups?

    - by Massimo
    I'm setting up SquidGuard (1.4) to validate users against an Active Directory domain and apply ACLs based on group membership; this is an example of my squidGuard.conf:
      src AD_Group_A {
          ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com))
      }
      src AD_Group_B {
          ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
      }
      dest dest_a {
          domainlist dest_a/domains
          urllist dest_b/urls
          log dest_a.log
      }
      dest dest_b {
          domainlist dest_b/domains
          urllist dest_b/urls
          log dest_b.log
      }
      acl {
          AD_Group_A {
              pass dest_a !dest_b all
              redirect http://some.url
          }
          AD_Group_B {
              pass !dest_a dest_b all
              redirect http://some.url
          }
          default {
              pass !dest_a !dest_b all
              redirect http://some.url
          }
      }
    All works fine if a user is a member of Group_A OR Group_B. But if a user is a member of BOTH groups, only the first source rule is evaluated, thus applying only the first ACL. I understand this is due to how source rule matching works in SquidGuard (if one rule matches, evaluation stops there and then the related ACL is applied); so I tried this, too:
      src AD_Group_A_B {
          ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com))
          ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
      }
      acl {
          AD_Group_A_B {
              pass dest_a dest_b all
              redirect http://some.url
          }
          [...]
      }
    But this doesn't work either: if a user is a member of either one of those groups, the whole source rule is matched anyway, so he can reach both destinations (which is of course not what I want). The only solution I found so far is creating a THIRD group in AD, and assigning a source rule and an ACL to it; but this setup grows exponentially with more than two or three destination sets. Is there any way to handle this better?
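    One idea to test (a sketch only, not a verified configuration): since SquidGuard stops at the first matching source, a source that matches only users who belong to BOTH groups can be declared before the single-group sources and listed first in the acl block, with the combined membership expressed inside a single LDAP filter. The existing AD_Group_A, AD_Group_B and default blocks would follow it unchanged.

      src AD_Group_A_and_B {
          # Single LDAP search requiring both memberOf attributes at once.
          ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
      }

      acl {
          AD_Group_A_and_B {
              pass dest_a dest_b all
              redirect http://some.url
          }
          # AD_Group_A, AD_Group_B and default as before
      }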

    Read the article

  • Biztalk 2009 logshipping with SQL 2008

    - by Manjot
    Hi, I am setting up BizTalk log shipping for a BizTalk 2009 database. Following the http://msdn.microsoft.com/en-us/library/aa560961.aspx article, I am doing the following to set up BizTalk log shipping on the destination server:
    Enable ad hoc queries by:
      sp_configure 'show advanced options',1
      go
      reconfigure
      go
      sp_configure 'Ad Hoc Distributed Queries',1
      go
      reconfigure
      go
      sp_configure 'show advanced options',0
      go
      reconfigure
      go
    Execute LogShipping_Destination_Schema & LogShipping_Destination_Logic in master on the destination server.
    Run:
      exec bts_ConfigureBizTalkLogShipping @nvcDescription = '',
      @nvcMgmtDatabaseName = '',
      @nvcMgmtServerName = '',
      @SourceServerName = null, -- null indicates that this destination server restores all databases
      @fLinkServers = 1 -- 1 automatically links the server to the management database
    When I run this I receive the following error:
      Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
    After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So, I asked the domain admin to create SPNs for the servers and SQL service accounts, for both source and destination, using the name and FQDN, and to enable the computer names and service accounts for delegation. When I run the following:
      select * from sys.dm_exec_connections
    I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Any help please?
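    Two things that are often suggested in this situation, given here only as a hedged sketch (the linked server name, login and password are placeholders):

      -- Check whether the connection is actually using Kerberos; NTLM here usually
      -- means the SPN is still not being picked up, so delegation cannot work.
      SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID;

      -- Workaround that avoids delegation entirely: map an explicit SQL login for
      -- the linked server that bts_ConfigureBizTalkLogShipping created.
      EXEC sp_addlinkedsrvlogin
           @rmtsrvname  = 'SourceServerName',
           @useself     = 'FALSE',
           @locallogin  = NULL,
           @rmtuser     = 'logship_login',
           @rmtpassword = 'StrongPassword1';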

    Read the article

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform & load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp then archive as it consumes them. So, my requirements & desires:
    · It is never used to refresh updated files - only used to deliver new files.
    · Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up.
    · In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
    · If the transfer fails (network down, file system full, permissions, file locked, etc.), then it should retry periodically - and never fail in a non-recoverable way, or in a way that sends the file twice or never sends the file.
    · Should be able to copy files to 2+ destinations.
    · Should have a consolidated log so that it's easy to find problems.
    · Should have an optional checksum feature.
    Any recommendations? Can Unison do this well?
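    None of the stock tools covers every item on that list out of the box, but the core delivery pattern (copy under a temporary name, rename into place only on success, keep an archived source copy, retry later on failure) is easy to prototype around rsync and ssh. A rough sketch, with placeholder hosts and paths, that a cron job could run repeatedly:

      #!/bin/bash
      SRC_DIR=/data/outbound
      DEST_HOST=transform-host
      DEST_DIR=/data/inbound
      ARCHIVE_DIR=/data/archive/$(date +%Y%m%d)
      mkdir -p "$ARCHIVE_DIR"

      for f in "$SRC_DIR"/*; do
          [ -f "$f" ] || continue
          name=$(basename "$f")
          # Deliver under a dot-prefixed temp name so the downstream daemon ignores it,
          # then rename atomically on the destination once the copy has finished.
          if rsync -a --partial "$f" "$DEST_HOST:$DEST_DIR/.$name.part" &&
             ssh "$DEST_HOST" "mv '$DEST_DIR/.$name.part' '$DEST_DIR/$name'"; then
              mv "$f" "$ARCHIVE_DIR/$name"     # keep the source copy for recovery
          else
              echo "transfer of $name failed; will retry on next run" >&2
          fi
      done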

    Read the article

  • Configuring nginx to check for hard files in only a few directories,

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:
      +-- public
      │   +-- components
      │   +-- css
      │   +-- img
      +-- routes
      +-- views
    Essentially, I have the root set to public. I want all requests destined for /components/, /css/ and /img/ to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an IO operation:
      /foo/asdf
      /bar
      /baz/index.html
    None of those should result in the disk being touched. I have a stanza that does the proxy to node.js:
      location @proxy {
          internal;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://localhost:3030;
          proxy_redirect off;
      }
    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:
      location /components/ { try_files $uri @proxy; }
      location /css/ { try_files $uri @proxy; }
      location /img/ { try_files $uri @proxy; }
    However, there is nothing that I can find that will give me:
      location / { try_files @proxy }
    How do I get the effect I want?
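    A sketch of one way this is usually arranged (note that try_files takes space-separated arguments, no comma): only the three asset prefixes get a try_files that touches the disk, while location / carries the proxy directives directly, so no filesystem check ever happens for other paths. The root path below is a placeholder.

      root /srv/app/public;

      location /components/ { try_files $uri @proxy; }
      location /css/        { try_files $uri @proxy; }
      location /img/        { try_files $uri @proxy; }

      location / {
          # No try_files here, so nginx never stats the disk for these requests.
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://localhost:3030;
          proxy_redirect off;
      }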

    Read the article

  • Why would my network slow down?

    - by monkthemighty
    The network at my work has about 40 computers on it and quite a few printers. When there are a lot of people working, the network will be slow. I can test the ping between my computer and the router and it will keep rising, sometimes to the point that it times out. The router we are using is running Ubuntu on an Atom processor and it has 4 GB of RAM. When the network slows, the process ksoftirqd will be using most if not all of the processing power. I have found that ksoftirqd is a process that handles IRQ requests. Also, when the network slows down, I have captured packets on the router using tshark and looked at them with Wireshark on my laptop. The capture shows a lot of packets with TCP Dup ACK and TCP Retransmissions. The destinations of the TCP Dup ACKs and TCP retransmissions are mostly to computers on the network, but there are some that are affected far more than others. What could this problem be caused by?
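    A few diagnostics that are often suggested for this combination of symptoms (ksoftirqd pegged plus heavy retransmission) - a sketch only, with the interface name and capture file as placeholders:

      # Is one CPU spending all its time in NET_RX/NET_TX softirqs?
      watch -d cat /proc/softirqs

      # Interface-level drops, errors and the negotiated speed/duplex
      ethtool -S eth0 | grep -i -E 'drop|err'
      ethtool eth0

      # How many retransmissions per minute does the capture actually contain?
      tshark -r capture.pcap -q -z io,stat,60,tcp.analysis.retransmission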

    Read the article

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
    The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO. To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications.
    Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The extensive reporting capabilities allow you to get a quick status of a file transfer and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy.
    Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use case patterns:
    · Standalone: transferring files on its own using embedded FTP and sFTP servers and the file systems to which it has access.
    · SOA Integration: a SOA application can be the source or target of a transfer. A SOA application can also be the common endpoint for the target of one transfer and the source of another.
    · B2B Integration: a B2B application can be the source or target of a transfer. A B2B application can also be the common endpoint for the target of one transfer and the source of another.
    · Healthcare Integration: a Healthcare application can be the source or target of a transfer. A Healthcare application can also be the common endpoint for the target of one transfer and the source of another.
    · Oracle Service Bus (OSB) Integration: Oracle MFT can integrate with Oracle Service Bus web service interfaces. An OSB interface can be the source or target of a transfer. An Oracle Service Bus interface can also be the common endpoint for the target of one transfer and the source of another.
    · Hybrid Integration: Oracle MFT can be one participant in a web of data transfers that includes multiple application types.
    Oracle Managed File Transfer has four user roles: file handlers, designers, monitors, and administrators.
    File Handlers:
    - Copy files to file transfer staging areas, which are called sources.
    - Retrieve files from file transfer destinations, which are called targets.
    Designers:
    - Create, read, update and delete file transfer sources.
    - Create, read, update and delete file transfer targets.
    - Create, read, update and delete transfers, which link sources and targets in complete file delivery flows.
    - Deploy and test transfers.
    Monitors:
    - Use the Dashboard and reports to ensure that transfer instances are successful.
    - Pause and resume lengthy transfers.
    - Troubleshoot errors and resubmit transfers.
    - View artifact deployment details and history.
    - View artifact dependence relationships.
    - Enable and disable sources, targets, and transfers.
    - Undeploy sources, targets, and transfers.
    - Start and stop embedded FTP and sFTP servers.
    Administrators:
    - All file handler tasks
    - All designer tasks
    - All monitor tasks
    - Add other users and determine their roles
    - Configure user directory permissions
    - Configure the Oracle Managed File Transfer server
    - Configure embedded FTP and sFTP servers, including security
    - Configure B2B and Healthcare domains
    - Back up and restore the Oracle Managed File Transfer configuration
    - Purge transferred files and instance data
    - Archive and restore instance data and payloads
    - Import and export metadata
    You will find all the related information about SOA Suite 12.1.3 Oracle Managed File Transfer in the documentation: Using Oracle Managed File Transfer.
    Resources and links:
    · Oracle Unveils Oracle SOA Suite 12c
    · Oracle Managed File Transfer
    · Oracle Managed File Transfer SOA 12c White Paper
    For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

    Read the article

  • Failed to install GRUB on a separate '/boot' partition on a fake RAID 0 (12.04LTS)

    - by gerben
    I'm having some problems getting GRUB configured for Ubuntu 12.04 LTS on a fake RAID 0. I can either get the GRUB rescue prompt at startup, or just a GRUB prompt, but I cannot boot to Ubuntu manually. How can I configure GRUB to actually use the Ubuntu install? The steps taken:

    Installing Ubuntu on fake RAID
    The Ubuntu installer cannot install Ubuntu on the drive. After defining the partitions to use it fails with "Error: ???", and pressing OK terminates the installer. Therefore, I used GParted to configure the partitions:
      /dev/mapper/sil_agadaccfacbg  : the RAID configuration (created partition)
      /dev/mapper/sil_agadaccfacbg1 : ext2, 200MiB (with 'boot' flag)
      /dev/mapper/sil_agadaccfacbg3 : ext2, 67.75GiB (which will contain Ubuntu)
      /dev/mapper/sil_agadaccfacbg2 : extended, 1.00GiB (for swap); contains /dev/mapper/sil_agadaccfacbg5: unknown
    Because of the fake RAID, I already mounted the destination partitions before running the Ubuntu installer:
      > mkdir /mnt/boot
      > sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot
      > mkdir /mnt/ubuntu
      > sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu
    In the installer I chose the following partition usage:
      /dev/mapper/sil_agadaccfacbg1 ext2, mount at /boot (209MB)
      /dev/mapper/sil_agadaccfacbg3 ext2, mount at / (72751MB)
      /dev/mapper/sil_agadaccfacbg5 swap
      Device for boot loader installation: /dev/mapper/sil_agadaccfacbg, linux device-mapper (striped) (74.0GB)
    This will install Ubuntu, but it fails to install GRUB (it seems to use /dev/sda no matter which device I choose).

    Installing GRUB with dpkg-reconfigure
    I followed this guide, but adapted it for two partitions:
      sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu
      sudo mount --bind /dev /mnt/ubuntu/dev
      sudo mount --bind /proc /mnt/ubuntu/proc
      sudo mount --bind /sys /mnt/ubuntu/sys
      sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot
      sudo mount --bind /boot /mnt/boot
      sudo chroot /mnt/ubuntu dpkg-reconfigure grub-pc
    However, it does not ask where to install GRUB (I should choose /dev/mapper/sil_agadaccfacbg somewhere). After reboot I get the GRUB rescue prompt with the message "no such device".

    Installing GRUB with grub-install
    After the same mount commands as above, I continued with:
      > sudo grub-install --root-directory=/mnt/boot /dev/mapper/sil_agadaccfacbg
    This gives the following message:
      /usr/sbin/grub-probe: error: cannot find a device for /mnt/boot/boot/grub (is /dev mounted?)
    It does succeed when mounting just the boot partition:
      sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt
      sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg
    This finishes with: Installation finished. No error reported. After reboot I get the GRUB console, with the welcome text. Attempting to manually start Ubuntu:
      ls
      (hd0) (hd0,msdos3) : (Ubuntu install partition)
            (hd0,msdos1) : (Ubuntu boot partition)
      (hd1) (hd1,msdos1) : (Ubuntu live USB)
      ls (hd0,msdos3)/ contains: vmlinuz, initrd.img, lib/, tmp/, mnt/, var/, proc/, boot/, root/, etc/, run/, media/, sbin/, bin/, selinux/, dev/, srv/, home/, sys/
      ls (hd0,msdos1)/ contains: grub/, boot/, initrd.img-3.8.0-29-generic, vmlinuz-3.8.0.29-generic, config-3.8
      linux (hd0,msdos3)/vmlinuz
    This returns "error: out of disk".

    Installing GRUB on the Ubuntu partition with grub-install
      > sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt
      > sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg
    This finishes with the message: Installation finished. No error reported. After reboot I get the message "error: out of disk" and the GRUB rescue prompt.

    Configuring GRUB with grub-mkconfig
    Attempting to run grub-mkconfig with different destinations results in the same message:
      /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)
    Remarks: Initially I didn't use a separate /boot partition, but the GRUB install then also failed. Because some mention that a small partition at the beginning of the drive is necessary on old machines, I retried with a /boot partition. This is a single boot (no other OSes installed/used).
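    For what it's worth, one detail in the dpkg-reconfigure attempt above stands out: the boot partition was mounted at /mnt/boot, next to the target root, rather than at /mnt/ubuntu/boot inside it, so the chrooted GRUB tools never see the real /boot. A hedged sketch of the sequence with the boot partition mounted inside the target root (device names are the ones from the question, and there is no guarantee this alone resolves the fake-RAID issue):

      sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu
      sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/ubuntu/boot
      sudo mount --bind /dev  /mnt/ubuntu/dev
      sudo mount --bind /proc /mnt/ubuntu/proc
      sudo mount --bind /sys  /mnt/ubuntu/sys
      sudo chroot /mnt/ubuntu
      grub-install /dev/mapper/sil_agadaccfacbg
      update-grub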

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >