Search Results

Search found 20409 results on 817 pages for 'url routing'.


  • Set up Glassfish connection pool to talk to a database on a Ubuntu VPS

    - by Harry Pham
    On my Ubuntu VPS, I have a MySQL server and a GlassFish 3.0.1 application server running, and I am having a hard time getting GlassFish to successfully ping the database. Here is my GlassFish setup (assume x.y.z.t is the IP of my VPS): Resource Type: javax.sql.ConnectionPoolDataSource, User: root, DatabaseName: scholar, Url: jdbc:mysql://x.y.z.t:3306/scholar, URL: jdbc:mysql://x.y.z.t:3306/scholar, Password: xxxx, PortNumber: 3306, ServerName: x.y.z.t. Inside my glassfish3/glassfish/lib, I have mysql-connector-java-5.1.13-bin.jar. In the mysql system database, here is the result of the query select User, Host from user;
        +------------------+-----------+
        | User             | Host      |
        +------------------+-----------+
        | root             | 127.0.0.1 |
        | debian-sys-maint | localhost |
        | root             | localhost |
        | root             | yunaeyes  |
        +------------------+-----------+
    Now from my machine, if I try to connect to this db via MySQL Browser (MySQL client software), I can't. From the table above, it seems like only localhost connections to this db are allowed. Keep in mind that both my db and my GlassFish are on the same VPS. Please help
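
    A quick sanity check, sketched below, is whether any grant actually matches the host GlassFish connects from. The statements assume they are run as an administrative user on the VPS's MySQL server; the 'glassfish' account name is purely illustrative and not from the original question. Since root@127.0.0.1 already exists, simply pointing the pool's URL and ServerName at 127.0.0.1 instead of the public IP may be enough.

        -- Hypothetical example, run on the VPS's MySQL server as an admin user.
        -- 1. See which user@host combinations exist (same query as above).
        SELECT User, Host FROM mysql.user;

        -- 2. If the pool must keep using the public IP x.y.z.t, add a matching
        --    account instead of reusing root (account name is illustrative).
        CREATE USER 'glassfish'@'x.y.z.t' IDENTIFIED BY 'xxxx';
        GRANT ALL PRIVILEGES ON scholar.* TO 'glassfish'@'x.y.z.t';
        FLUSH PRIVILEGES;

    Note also that on Ubuntu, MySQL's default bind-address is 127.0.0.1, so a connection to the public IP can fail even with the right grant; that is another reason to prefer 127.0.0.1 in the JDBC URL when the app server and database share a host.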

    Read the article

  • DNS hijack - prevention tips

    - by user578359
    Hi there, Over the weekend it looks like the DNS was hijacked on two of my domains. My set up is I have the sites registered on 1and1.co.uk, with dns nameservers pointing to Hostgator in the US where the sites are hosted. I also had cloudflare CDN running on the sites (via hostgator cpanel). My question is any ideas as to how this happened, and how I could either monitor it so I know if it occurs again, or strengthen the set up/service to minimise the risk. History: I received a ping from my site monitoring service that the sites were down. When I checked the sites were up so I assumed it was local to the monitoring service I received a ping last night the sites were up When I checked, one site was redirecting to download-manual.com (and checking that URL now, the home page is not the same as the one I saw, so they too may have been hijacked/hacked) The other site URL remained the same but had one of those standard site search pages which bounce you off to either phishing or paid for search sites I notified Hostgator who told me Cloudflare or 1and1 were the issue. I removed cloudflare, and contacted both them and hostgator, and am awaiting a response, but am not holding my breath. Is this common? I've never heard of this or come across this before. It's pretty scary that this can happen so easily. Appreciate any input. **Update: I've now spoken to support at 1and1, Hostgator, and Cloudflare, and each one claims it has nothing to do with them, and must be one of the others. Larry, curly, moe.

    Read the article

  • How to protect UI components using OPSS Resource Permissions

    - by frank.nimphius
    ADF security protects ADF bound pages, bounded task flows and ADF Business Components entities with framework-specific JAAS permission classes (RegionPermission, TaskFlowPermission and EntityPermission). If used in combination with the ADF security expression language and security checks performed in Java, this protection already provides you with fine-grained access control that can also be used to secure UI components like buttons and input text fields. For example, the EL shown below disables the user profile panel tabs for unauthenticated users: <af:panelTabbed id="pt1" position="above"> ... <af:showDetailItem text="User Profile" id="sdi2" disabled="#{!securityContext.authenticated}"> </af:showDetailItem> ... </af:panelTabbed> The next example disables a panel tab item if the authenticated user is not granted access to the bounded task flow exposed in a region on this tab: <af:panelTabbed id="pt1" position="above"> ... <af:showDetailItem text="Employees Overview" id="sdi4" disabled="#{!securityContext.taskflowViewable['/WEB-INF/EmployeeUpdateFlow.xml#EmployeeUpdateFlow']}"> </af:showDetailItem> ... </af:panelTabbed> Security expressions like those shown above allow developers to check the user's permission, authentication and role membership status before showing UI components. Similarly, using Java, developers can use code like the following to verify the user's authentication status: ADFContext adfContext = ADFContext.getCurrent(); SecurityContext securityCtx = adfContext.getSecurityContext(); boolean userAuthenticated = securityCtx.isAuthenticated(); Note that the Java code uses the same security context reference that is used with the expression language. But is this all there is? No! The goal of ADF Security is to enable all ADF developers to build secure web applications with JAAS (Java Authentication and Authorization Service). For this, more fine-grained protection can be defined using the ResourcePermission, a generic JAAS permission class owned by the Oracle Platform Security Services (OPSS).
    Using the ResourcePermission class, developers can grant permission to functional parts of an application that are not protected by page or task flow security. For example, an application menu allows creating and canceling product shipments to customers. However, only a specific user group - or application role, which is the better way to use ADF Security - is allowed to cancel a shipment. To enforce this rule, a permission is needed that can be used declaratively on the UI to hide a menu entry and programmatically in Java to check the user permission before the action is performed. Note that multiple lines of defense are what you should implement in your application development. Don't just rely on UI protection through hidden or disabled command options. To create a menu protection permission for an ADF Security enabled application, you choose Application | Secure | Resource Grants from the Oracle JDeveloper menu. The opened editor shows a visual representation of the jazn-data.xml file that is used at design time to define security policies and user identities for testing. An option in the Resource Grants section is to create a new Resource Type. A list of pre-defined types exists for you to create policy definitions for, and many of these pre-defined types use the ResourcePermission class. To create a custom Resource Type, for example to protect application menu functions, you click the green plus icon next to the Resource Type select list. The Create Resource Type editor that opens allows you to add a name for the resource type, a display name that is shown when granting resource permissions, and a description. The ResourcePermission class name is already set. In the menu protection sample, you add the following information: Name: MenuProtection, Display Name: Menu Protection, Description: Permission to grant menu item permissions. OK the dialog to close the resource permission creation. To create a resource policy that can be used to check user permissions at runtime, click the green plus icon in the Resources section of the Resource Grants section. In the Create Resource dialog, provide a name for the menu option you want to protect. To protect the cancel shipment menu option, create a resource with the following settings: Resource Type: Menu Protection, Name: Cancel Shipment, Display Name: Cancel Shipment, Description: Grant allows user to cancel customer goods shipment. A new resource, Cancel Shipment, is added to the Resources panel. Initially the resource is not granted to any user, enterprise or application role. To grant the resource, click the green plus icon in the Granted To section, select the Add Application Role option and choose one or more application roles in the opened dialog. Finally, you click the process action to define the policy. Note that a permission can have multiple actions that you can grant individually to users and roles. The cancel shipment permission, for example, could have another action "view" defined to determine which users should see that this option exists and which users don't. To use the cancel shipment permission, select the disabled property on a command item, like af:commandMenuItem, and click the arrow icon on the right. From the context menu, choose the Expression Builder entry. Expand the ADF Bindings | securityContext node and click the userGrantedResource option. Hint: you can expand the Description panel below the EL selection panel to see an example of what the grant should look like.
    The EL that is created needs to be manually edited to read #{!securityContext.userGrantedResource['resourceName=Cancel Shipment;resourceType=MenuProtection;action=process']} OK the dialog so the permission-checking EL is added as a value to the disabled property. Running the application and expanding the Shipment menu shows the Cancel Shipments menu item disabled for all users that don't have the custom menu protection resource permission granted. Note: Following the steps listed above, you create a JAAS permission and declaratively configure it for function security in an ADF application. Do you need to understand JAAS for this? No! This is one of the benefits that you gain from using the ADF development framework. To implement multiple lines of defense for your application, the action performed when clicking the enabled "Cancel Shipments" option should also check whether the authenticated user is allowed to process it. For this, code as shown below can be used in a managed bean: public void onCancelShipment(ActionEvent actionEvent) { SecurityContext securityCtx = ADFContext.getCurrent().getSecurityContext(); //create instance of ResourcePermission(String type, String name, String action) ResourcePermission resourcePermission = new ResourcePermission("MenuProtection","Cancel Shipment", "process"); boolean userHasPermission = securityCtx.hasPermission(resourcePermission); if (userHasPermission){ //execute privileged logic here } } Note: To learn more about ADF Security, visit http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/adding_security.htm#BGBGJEAH Note: A monthly summary of OTN Harvest blog postings can be downloaded from ADF Code Corner. The monthly summary is a PDF document that contains supporting screenshots for some of the postings: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • Give Chromium-Based Browser Desktop Notifications a Native System Look in Ubuntu

    - by Asian Angel
    Desktop notifications from Chromium-based browsers are an awesome feature, but they do not blend in well at all with the native system theming in Ubuntu. Now you can fix that small problem using the wonderful Chromify-OSD extension created by Marco Ceppi. Once you get the extension installed you can give it a quick test run using the link and information we have listed below. As you can see in the image above, the new notification style looks absolutely wonderful. Chromify-OSD (Chrome Web Store) [via OMG! Ubuntu!] You can test the new look of the notifications for yourself using the following webpage. Keep in mind that the extension needs to be installed first before this will work though. Note: Enter the following image URL into the Icon blank (http://www.rgraph.net/images/logo.png) or the URL for an appropriate image, otherwise the notification may not work properly during your test. Chromify Sample HTML5 Notification Test Page The wallpaper shown in the screenshot above can be downloaded here: anime sport [DesktopNexus]

    Read the article

  • lighttpd server-status

    - by krys
    I have enabled lighttpd mod_status as /server-status. When I go to the URL, I get the status page. I am interested in monitoring connections -- most specifically KeepAlive connections, to make sure KeepAlive is working correctly. The problem is I only see the full connection info for the /server-status request itself. All other requests do not have the URI or host columns filled in: X.X.X.X 0/0 0/4673 handle-req 0 test.mydomain.com /server-status (/server-status) This makes it difficult to know which URL was last handled by a particular connection. Is there something special that I need to do to show this information (URI) in /server-status?

    Read the article

  • Spring Roo Database Reverse Engineer with Oracle

    - by kerry
    So you are trying to reverse engineer an Oracle database with Roo? Unfortunately, due to licensing restrictions with the Oracle JDBC drivers, this is a little difficult. There are a few blog posts and forum threads that address the problem, but I figured I would post what worked for me here. First, you need to download the appropriate Oracle drivers from Oracle. The required login, stringent password requirements, nosy registration form, and general system instability made this a pretty painful step for me. I'd also like to say that companies that have password requirements that don't allow symbols (or any other non-standard requirement) have a special place in my heart. Having to recover my password every time I go to your site virtually guarantees I will only go there when I absolutely have to (not often). Anyways, once you have it downloaded you need to install it with Maven: mvn install:install-file -Dfile=~/Downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true Here comes the fun part. You need to create an OSGi wrapper for the driver to install it in Roo; otherwise, Roo cannot see the driver. Create a new folder and put in it the contents of the Oracle Roo addon pom gist I created. Now build it with Maven. You may want to change some of the artifact ids and dependencies for your particular situation. mvn package Now open a Roo shell and execute the following command: osgi install --url file:///Users/me/my-osgi-project/target/the-jar-it-built.jar Now run (in Roo): jpa setup --provider HIBERNATE --database ORACLE dependency remove --groupId com.oracle --artifactId ojdbc14 --version 10.2.0.2 dependency add --groupId com.oracle --artifactId ojdbc6 --version 11.2.0.3 database properties set --key database.driverClassName --value oracle.jdbc.OracleDriver database properties set --key database.url --value jdbc:oracle:thin:@%YOUR_CONNECTION_INFO% database properties set --key database.username --value %YOUR_USERNAME% database properties set --key database.password --value %YOUR_PASSWORD% database reverse engineer --schema %YOUR_SCHEMA% --package ~.domain If you have any package loading exceptions when running the reverse engineer command, you can uninstall the OSGi bundle, set the package to optional in the OSGi pom in the IncludedPackages tag (javax.some.package.*;resolution:=optional), rebuild, then reinstall in Roo.

    Read the article

  • Advanced reporting in Oracle Service Bus

    - by [email protected]
    Reporting in OSB is useful: it allows you to audit messages going through OSB. The service bus console allows you to view the content that you reported. To report data you simply use the Report action in your proxy. The action itself is rather straightforward. You specify the content to report ($body for example), an optional key for easier search (for example the id of the record) and that's it. Sometimes, though, what you want to do is a bit more complicated. I recently had a case where the key was built from the message type (XML) and the id of the message. Seems quite simple, but the id could be any element anywhere in the message depending on its type. This could be handled by 'if' statements, but adding new cases would mean changing the proxy service, and if you have lots of message types this can get boring, so I wanted the solution to be as dynamic as possible (read "just change a configuration file and that's it"). The following entry details how you can make this dynamic in your proxy by using XQuery/XSLT. First step, the XQuery: we're going to use an XQuery to make the mapping between the XML message type and the location of the identifier in it. We assume here that the message type is the first node of the input XML and use a rather simple XPath to find the identifier. The XQuery looks like this for two messages:

    <reportmapping>
        <row>
            <logical>messageType1</logical>
            <type>MT1</type>
            <reportingreferencelocation>//customID</reportingreferencelocation>
        </row>
        <row>
            <logical>messageType2</logical>
            <type>MT2</type>
            <reportingreferencelocation>//theOtherIDLocation</reportingreferencelocation>
        </row>
    </reportmapping>

    Second step, the XSLT: to get the identifier value at the dynamic path, we use an XSLT transformation. This XSLT takes an XML parameter as input which contains our XPath (coming from the previous XQuery). The XSLT looks like this:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xalan="http://xml.apache.org/xalan">
        <xsl:param name="PathToNode"/>
        <xsl:template match="/">
            <IDVALUE>
                <xsl:value-of select="xalan:evaluate($PathToNode/reportingreferencelocation)"/>
            </IDVALUE>
        </xsl:template>
    </xsl:stylesheet>

    (Note the use of a Xalan function here; Xalan is the XSLT processor used in WebLogic Server.) Last step, the proxy service: we now wire everything together in the proxy service. First we assign the XQuery to a variable. We then get the entry in the XQuery corresponding to the record we're treating. We then extract the id of the message using the XSLT transformation. A final assign builds the variable that will be used as the reporting key, and the Report action is then called with this variable. Everything is set up; we're now ready to test. Testing the solution: using the test console, we send our first XML ...

    <messageType1>
        <sender>test console 1</sender>
        <customID>ID12345</customID>
        <content>
            <field1>value of field 1</field1>
        </content>
    </messageType1>

    ... and a second one of another supported type:

    <messageType2>
        <header>
            <theOtherIDLocation>ID67890</theOtherIDLocation>
        </header>
        <body>
            <data>Test data</data>
        </body>
    </messageType2>

    Reporting result: the report is done as expected. Conclusion: now if a new message type must be supported, we only have to modify the XQuery and nothing at the proxy service level. Sample project attached to this entry: sbconfig-dynamicReport.jar

    Read the article

  • Single database, multiple system dependency

    - by davenewza
    Consider an environment where we have a single, core database, with many separate systems using this one database. This means that all of these systems share a common dependency, which ultimately introduces coupling between them, so we cannot always evolve systems independently of each other. Structural changes to the database (even if only intended for one particular system) require a full sweep test of ALL systems, and may require that other systems be 'patched' and subsequently released. This is especially tricky when you want to have separate teams working on different projects. What is a good 'pattern' to help in avoiding such coupling? I would imagine that a database should be exclusively depended on by one system. If other systems require data for whatever reason, they should request it from an API service of some kind; a rough sketch of that idea follows below. A drawback of this approach which comes to mind is performance: routing data between high-throughput systems through service calls is much slower than through a database connection.
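
    To make the API-service idea concrete, here is a minimal sketch of what such a seam could look like. The names are illustrative only; the point is that consumers depend on a narrow, versionable contract owned by the system that owns the database, not on the schema itself.

        // Hypothetical contract exposed by the system that owns the database.
        // Other systems compile against this interface (or its HTTP equivalent),
        // so schema changes stay hidden behind it.
        public interface CustomerDirectory {
            CustomerRecord findById(long customerId);
        }

        // A deliberately small, stable data carrier returned to consumers.
        public final class CustomerRecord {
            private final long id;
            private final String displayName;

            public CustomerRecord(long id, String displayName) {
                this.id = id;
                this.displayName = displayName;
            }

            public long getId() { return id; }
            public String getDisplayName() { return displayName; }
        }

    Whether the contract is an in-process interface, a REST endpoint or a message queue is secondary; the coupling question is decided by who owns the schema.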

    Read the article

  • apache 2.4 php-fpm proxypassmatch for pretty urls

    - by tubaguy50035
    I have a URL like http://newsymfony.dev/app_dev.php/_profiler/5080653d965eb. I would like to send this script to PHP-FPM. I currently have this as my vhost: <VirtualHost *:80> ServerName newsymfony.dev DocumentRoot /home/COMPANY/nwalke/www/newsymfony.dev/web/ ErrorLog /var/log/apache2/error_log LogLevel info CustomLog /var/log/apache2/access_log combined <Directory /home/COMPANY/nwalke/www/newsymfony.dev/web/> AllowOverride All Require all granted DirectoryIndex app.php </Directory> ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9003/home/COMPANY/nwalke/www/newsymfony.dev/web/$1 </VirtualHost> If I browse to app_dev.php it works just fine. But if I do app_dev.php/_profiler/5080653d965eb I get a 404 and the request never gets sent to FPM. How can I alter my ProxyPassMatch to pass anything with .php in the URL? I'm trying to do this with Symfony, but I'm pretty sure it applies to everything.
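
    One commonly suggested variation, sketched below and untested here, is to let the pattern accept an optional trailing path after the .php script name, so requests such as /app_dev.php/_profiler/... still match and the extra segment is forwarded to PHP-FPM as PATH_INFO:

        # Sketch only: same backend and document root as in the vhost above,
        # but the regex now tolerates anything after the .php script name.
        ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9003/home/COMPANY/nwalke/www/newsymfony.dev/web/$1

    For the right script to be resolved, PHP-FPM generally also needs cgi.fix_pathinfo left at its default of 1 so it can split SCRIPT_FILENAME from the trailing PATH_INFO.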

    Read the article

  • Ridiculously easy AJAX with ASP.NET MVC and jQuery

    - by eddraper
    After deciding I wanted to dive full-on into the world of ASP.NET MVC 2, I began doing some research into what would be the best way to support some of my required AJAX functionality on this platform. The result of these efforts was a barrage of options – many of which required completely different JScript infrastructure than what I planned to go forward with. As I've been delighted with jQuery so far, I began tossing out all approaches that didn't natively leverage it... Thus, I planned to resist the temptation to take any more <script> dependencies whatsoever, unless I thoroughly proved that jQuery could NOT do what I planned to do. Here's some code I wish I would've found early in my research. This would've saved me quite a bit of time and search engine bandwidth. ;-) <script type="text/javascript"> $(document).ready(function () { $('#div_name_here').load('<%=Url.Action("ACTION_NAME_HERE","CONTROLLER_NAME_HERE")%>'); $('#id_of_link_I_want_trigger_the_ajax_call') .bind('click', function (event) { $('#div_name_where_I_want_to_have_the_ajax_response_loaded_here').load('<%=Url.Action("ACTION_HERE","CONTROLLER_HERE")%>'); }) }) </script>

    Read the article

  • Handling SEO for Infinite pages that cause external slow API calls

    - by Noam
    I have an 'infinite' amount of pages in my site which rely on an external API. Generating each page takes time (1 minute). Links in the site point to such pages, and when a user clicks them they are generated and he waits. Considering I cannot pre-create them all, I am trying to figure out the best SEO approach to handle these pages. Options: 1. Create really simple pages for the web spiders and only real users will fetch the data and generate the page. I'm a little bit 'afraid' Google will see this as low quality content, which might also feel duplicated. 2. Put them under a directory in my site (e.g. /non-generated/) and put a disallow in robots.txt. The problem here is I don't want users to have to deal with a different URL when wanting to share this page or make sense of it. I thought about maybe redirecting real users from this URL back to the regular hierarchy and that way 'fooling' Google not to get to them; again, not sure Google will like me for that. 3. Let him crawl these pages. The main problem is I can't control the rate of the API calls, and also my site seems slower than it should from a spider's perspective (if he only crawled the generated pages, he'd think it's much faster). Which approach would you suggest?

    Read the article

  • Is there a way to browse a media server (MediaTomb) with a media player (VLC)?

    - by twig
    I currently have an EEE PC set up with Linux Mint as my media server using MediaTomb. I use VLC as my media player on another Windows computer to watch videos off the media server. It works fine, but the current process is: 1. Open up a browser and navigate to the folder. 2. Find the file I want to play and copy the URL. 3. Paste the URL into VLC and watch. This is fine for me on the PC, but it is a little troublesome for my parents to grasp (or for me to use on the phone). Ideally I'd like to: 1. Open up VLC. 2. Browse to the file (using VLC). 3. Click/select to play. If there is any solution which is similar to this, please let me know. I'm willing to change the software on both server and client to accommodate (although it somewhat depends on which formats are supported on the server). Side note: I've tried searching online for this but I find a lot of jargon such as "media server/centre", media streaming, DLNA, UPnP and feel that some people are either using them interchangeably or incorrectly.

    Read the article

  • B2B communication using IBM MQ

    - by Dheeraj Kumar M
    Oracle B2B 11g provides the out-of-the-box ability to connect to IBM MQ to exchange messages. This support is provided via the JMS offering of Oracle B2B, and is an addition to the stack of existing communication capabilities of B2B with trading partners. There are 2 ways of connecting to IBM MQ using B2B: 1. Credential based connectivity 2. .bindings based connectivity. As a pre-requisite to connect to IBM MQ, it is required to provide the following libraries in the classpath: a. com.ibm.mqjms.jar b. dhbcore.jar c. com.ibm.mq.jar d. com.ibm.mq.jmqi.jar e. mqcontext.jar f. com.ibm.mq.pcf.jar g. com.ibm.mq.commonservices.jar h. com.ibm.mq.headers.jar i. fscontext.jar j. jms.jar Add the above jars to the domain library directory, usually located at $DOMAIN_DIR/lib; the jars located in this directory will be picked up and added dynamically to the end of the server classpath at server startup (for example /user_projects/domains//lib/). Alternatively, the above jars can also be added as part of setDomainEnv.sh. Credential based connectivity - Outbound: configure the trading partner delivery channel to use the "Generic JMS" protocol. Inbound: configure the internal delivery channel to use the "Generic JMS" protocol with the following details:
    Destination Name: MQ Queue Name
    Connection Factory: MQ Queue Manager Name
    Destination Provider: java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=<host>:<QM Listen port>/<MQ Channel Name>;
    User Name: MQ User Name
    Password: MQ password
    .bindings based connectivity - As a pre-requisite, get/generate the .bindings file on the MQ server; this can be done by the MQ administrator. Then set the following values in the respective delivery channel for outbound/inbound:
    Destination Name: MQ Queue Name
    Connection Factory: MQ Queue Manager Name
    Destination Provider: java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=file:///<location of .bindings file>;

    Read the article

  • nginx dynamic servername with regular expression doesn't work for co.uk

    - by redn0x
    I'm trying to set up an nginx server which dynamically loads content from a folder for a domain. To do this I'm using regular expressions in the server name like so: server_name ((?<subdomain>.+)\.)?(?<domain>.+)\.(?<tld>.*); This will create 3 variables for nginx to use later on. For example, when using the following URL: test.foo.example.com this will evaluate to: $subdomain = test.foo $domain = example $tld = com The problem arises when the co.uk top-level domain is used. In this case, when using the URL test.foo.example.co.uk it will evaluate to: $subdomain = test.foo.example $domain = co $tld = uk How can I edit the regular expression so that it will also work for co.uk?
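
    An untested sketch of one way around this: rather than a catch-all (?<tld>.*), enumerate the public suffixes the server should accept, listing "co.uk" as a whole and not bare "uk". Backtracking then leaves the real domain label in the domain group:

        # Sketch, assuming only these suffixes need to be served.
        # For test.foo.example.co.uk this yields subdomain=test.foo,
        # domain=example, tld=co.uk; plain .com domains still work.
        server_name ~^((?<subdomain>.+)\.)?(?<domain>[^.]+)\.(?<tld>com|net|org|co\.uk)$;

    Note that a regex server_name in nginx has to start with a tilde; the trade-off of this approach is that every additional suffix (org.uk, com.au, and so on) has to be added to the alternation by hand.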

    Read the article

  • Getting the Internal Name of SharePoint List Fields

    - by Gino Abraham
    Over the last 2 weeks I was developing a tool to migrate a Lotus Notes database to SharePoint. The mapping between the Lotus Notes schema and the SharePoint list schema was done manually in an XML file for our tool. To map the columns we wanted the internal names of each field. There are quite a few ways to achieve this; I have explained a few below. If you want internal names for one or 2 columns you can do so by navigating to the list settings and clicking on the column name. Once you are in the column's details, you can check the query string of the page. The last item in the query string is the field's internal name. Replacing all "%5f" with '_' will give you the field's internal name. In my case there were more than 80 columns, so I used PowerShell to get the list of columns with details. Open Windows PowerShell and paste the following script after modifying the URL and list name: [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") $site = new-object Microsoft.SharePoint.SPSite("http://yoursitecollectionurl") $web = $site.OpenWeb() $list = $web.Lists["your list name"] $list.Fields | Format-Table Title, InternalName, TypeAsString I also found a tool on CodePlex which can generate a wrapper class for a list. The wrapper class will give you the GUID and internal name for all fields in the list. You can download the tool from http://imtech.codeplex.com/ Just enter the URL in the text box and hit open. All the site content will be listed on the left hand side; expand the list, right click and select generate wrapper class.

    Read the article

  • How to use private DNS to map private IP with "non registred" domain name

    - by PapelPincel
    I would like to use a private DNS (Route 53 in our case) in order to map hosts to EC2 instance private IP addresses. The hosted zone we are using for testing is not declared with any registrar (company-test.com.). There are different servers (Nagios, Puppet, ActiveMQ ...) all hosted in EC2, which means their IPs can change over time (restart, new instance launch...). It would be great if I could use DNS instead of the clients' /etc/hosts for mapping private IPs to internal domain names... The ActiveMQ server URL is activemq.company-test.com and it maps to (A record) the private IP address of the AMQ server. This URL is only reachable by other EC2 instances owned by the same AWS account. My question is how to configure EC2 instances so they can reach the ActiveMQ server WITHOUT having to buy a new domain company-test.com?

    Read the article

  • Should all new web projects build their backend based on xml/json result sets?

    - by Blankman
    If you were building a new SaaS project, would it make sense to start with all of the backend services returning xml/json? These days you need to build for both the web and mobile devices, and having a backend that is built from the start to return xml and json means you are ready to go mobile (all services have the business logic, so you won't be repeating anything). The web would be MVC, so the controller would just be routing the request to your service backend and converting the json or xml to html. The obvious downside is that you have to build a backend, and then another web project that calls your backend. But this also works in your favor, as it forces you to separate your concerns and not leak business logic into your controller/view layer. Thoughts?

    Read the article

  • Problems getting Squirrelmail and passenger working on apache

    - by Kenneth
    I'm trying to have a setup where I run SquirrelMail and Passenger on the same Apache server, with one URL pointing to SquirrelMail and everything else handled by Passenger. I've gotten as far as both SquirrelMail and Passenger running fine by themselves, but when Passenger is running it handles all URLs. So far I've tried using Alias and Redirect to point a /webmail URL to SquirrelMail's directory, but that does not work. Here is my httpd.conf file: <VirtualHost *:80> ServerName not.my.real.server.name DocumentRoot /var/www/sinatra/public # Does not work: #Redirect webmail/ /usr/share/squirrelmail/ #<Directory /usr/share/squirrelmail> # Require all granted #</Directory> <Directory /var/www/sinatra/public> Order allow,deny Allow from all </Directory> </VirtualHost>
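
    A sketch of one direction worth trying (untested; directive placement is illustrative): map the /webmail prefix with Alias and explicitly switch Passenger off for the SquirrelMail directory, so only that prefix bypasses the Sinatra app:

        # Inside the same <VirtualHost *:80> block as above.
        Alias /webmail /usr/share/squirrelmail
        <Directory /usr/share/squirrelmail>
            PassengerEnabled off
            DirectoryIndex index.php
            Order allow,deny
            Allow from all
        </Directory>

    Note also that Alias and Redirect URL paths need a leading slash (/webmail rather than webmail/), which may be one reason the commented-out attempt above had no effect.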

    Read the article

  • How to redirect (or Alias) jump page with Apache

    - by Meltemi
    I'm not an Apache expert but need to make a small change to a web server. We are introducing a "jump page" URL that is different from a primary URL (for tracking reasons): /productA/index.html /productA/jump_index.html Basically I want to log that jump_index.html was requested and then return index.html. I don't want the client to wait 8 seconds or so for a redirect. How should we be handling this? Simply symlink (or alias) the file in the filesystem? Use mod_alias AliasMatch (if so, how exactly)? Something better still?
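
    One hedged sketch of the internal-rewrite route (mod_rewrite, untested here): the jump URL shows up in the access log as requested, but Apache serves index.html within the same request, so the client never sees a redirect round-trip:

        # In the vhost or server config for the site (paths from the question).
        RewriteEngine On
        RewriteRule ^/productA/jump_index\.html$ /productA/index.html [PT,L]

    A filesystem symlink would achieve much the same thing, since the access log records the requested URL either way; the rewrite just keeps the mapping in one place in the configuration instead of in the document tree.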

    Read the article

  • 404 Not Found for a PL script that exists!

    - by Abs
    Hello all, I make a GET request to a CGI script and I get a 404 error. However, I am 100% sure the script is present and it has permissions: -rwxr-xr-x 1 apache apache 6520 Sep 7 03:01 uu_ini_status_audios.pl The request URL is: http://mysite.com/cgi-bin/uu_ini_status_audios.pl?tmp_sid=893facacc5dc392ad0f4c91e6a9e8d40&rnd_id=0.12266222834382812 The error I get: The requested URL /cgi-bin/uu_ini_status_audios.pl was not found on this server. This used to work for me before, but I think it stopped working after I restarted Apache, so maybe it's a configuration I changed?? I checked the error logs for Apache and PHP and nothing useful was found to help me with my problem! I appreciate any help on this!
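
    Since /cgi-bin/ URLs normally resolve through a ScriptAlias rather than the DocumentRoot, a hedged first check is whether that alias still points at the directory that actually holds the script. The paths below are illustrative, not taken from the question:

        # Typical shape of the directives to look for in httpd.conf (sketch only;
        # substitute the directory where uu_ini_status_audios.pl really lives).
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
        <Directory "/var/www/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    If the ScriptAlias target and the script's actual location disagree, Apache reports exactly this kind of 404 even though the file exists elsewhere on disk.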

    Read the article

  • Bash Script help required

    - by Sunil J
    I am trying to get this bash script that I found on a forum to work. I copied it to a text editor, saved it as script.sh, ran chmod 700 on it and tried to run it. rootdir="/usr/share/malware" day=`date +%Y%m%d` url=`echo "wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\ awk '/\[Virus/'|tail -n 1|sed 's:\": :g' |\ awk '{print \"http://lists.clean-mx.com/pipermail/viruswatch/$day/\"$3}'"|sh` filename=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\ awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'` links -dump $url$filename | awk '/Up/'|grep "TR\|exe" | awk '{print $2,$8,$10,$11,$12"\n"}' > $rootdir/>$filename dirname=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\ awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'|sed 's:.html::g'` rm -rf $rootdir/$dirname mkdir $rootdir/$dirname cd $rootdir grep "exe$" $filename |awk '{print "wget \""$5"\""}' | sh ls *.exe | xargs md5 >> checksums mv *.exe $dirname rm -r $rootdir/*exe* mv checksums $rootdir/$dirname mv $filename $rootdir/$dirname I get the following messages: script.sh: line 11: /usr/share/malware/: Is a directory script.sh: line 11: links: command not found

    Read the article

  • Is this possible?

    - by PythonNewbie2
    Hello, I'm exploring some technologies, and JSP with JSF 2.0 and PrimeFaces seems really cool. I'm new to all of these, but I'm a fast learner. I'm wondering if I can create the web app I want with JSP/JSF/PrimeFaces, or should I be looking at different technologies? If I should, which ones do you recommend? Here's a basic description of the app: Users log in with their username and password (maybe I can somehow incorporate Google OpenID?). With a really nice UI, they will be presented a large list of questions specific to a certain category, for example, JSP. When they click on any of these questions, a little input opens up below it to allow the user to put in a link. If the link they enter has the same question on the webpage the URL points to, they will be awarded one point. This question then disappears and gets added to a different page that has a list of all correctly linked questions. On the right side of the screen, there will be a leaderboard with the usernames of the people with the top ten points. Is this possible with JSP/JSF/PrimeFaces, or should I be looking elsewhere for a different web technology? The idea is relatively simple - to be able to compile links to external websites for specific questions. I know I can build the UI easily with PrimeFaces. What I'm not sure about is whether JSP/JSF gives me the ability to parse the HTML at a certain URL to see if it contains certain words. I can do this with Python easily by using urllib. Any help would be appreciated!!! What would be more helpful than a "Yes" or "No" answer would be links to where I can see sample code of external HTML parsing. Your input is truly appreciated! Thanks!
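
    The fetch-and-check part is independent of JSF; any Java code running in a managed bean or service can do it with the plain JDK. Here is a minimal sketch (class and method names are made up for illustration), roughly the server-side equivalent of the urllib approach mentioned above:

        // Hypothetical helper: download a page and test whether it contains the
        // question text. Uses only java.net/java.io, no framework required.
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class LinkChecker {

            public static boolean pageContains(String pageUrl, String questionText) throws IOException {
                StringBuilder body = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(new URL(pageUrl).openStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line).append('\n');
                    }
                }
                return body.toString().contains(questionText);
            }
        }

    For anything beyond a plain substring check (markup-aware matching, tolerating broken HTML), an HTML parser library such as jsoup would be the usual next step.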

    Read the article

  • Do any CDN services offer multiple urls (or aliases) for your files?

    - by Jakobud
    Let's say a company has multiple commercial web properties that happen to use a lot of the same images on each site. For SEO reasons, the sites must not appear to be related to each other in any way. This means that the sites can't all link to the same image, even though they all use the same one. Therefore, an image is uploaded to each site and served from each site separately. In order to improve maintainability and latency, let's say the company wanted to use a CDN service. What I'm wondering is: if you upload a file, like an image, to a CDN, is there basically one single URL that you access that image at? Or do some (or all) CDN services offer alias URLs so that you can access the same resource from multiple URLs? Example of the undesirable situation: both sites link to the same file URL. Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Site XYZ links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Example of the DESIRABLE situation: both sites link to the same file via different URLs. Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Site XYZ links to <img src="http://123.cdnservice.com/some-alias-path/myimage.jpg"/> So in the end, there is only one single file, myimage.jpg, on the CDN server, but it is accessible from multiple URLs. Is this possible with CDN services? I know this would make browsers cache the same image twice, but at least it would be better for maintainability. Only one file would ever have to be uploaded.

    Read the article

  • Issue with www to non www redirect

    - by bob
    Hello, I am on Slicehost and I followed the articles that they gave for DNS redirection, and the www to non-www URL redirection does work. However, what if I want www.domain.com to be the default domain? Would I put www.domain.com. as my DNS record name, or would I keep domain.com. as my DNS record and then do something else? Basically, what happens now is that if someone goes to the URL www.domain.com/directory/something.html they will be redirected to domain.com and not to domain.com/directory/something.html. I would like the second thing to happen, not just go to domain.com and call it a day. I am running nginx and am confounded on how to solve this issue. I'm not sure whether it's an nginx issue or a DNS issue. Any help would be greatly appreciated!
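
    DNS only needs a record for each name (typically domain.com plus a www record pointing at the same slice); which name is canonical, and whether the path survives the redirect, is decided in nginx. An untested sketch of the path-preserving part, with domain.com standing in for the real domain:

        # Sketch: send www requests to the bare domain but keep the path,
        # so /directory/something.html survives the redirect.
        server {
            listen 80;
            server_name www.domain.com;
            rewrite ^(.*)$ http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            # ... normal site configuration ...
        }

    If www.domain.com should be the canonical name instead, swap the two server_name values and redirect the bare domain to www; in DNS you then keep an A record for domain.com and an A or CNAME record for www so that both names resolve.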

    Read the article

  • Powershell: Install-dotNET4 function

    - by marc dekeyser
    This function will download and install .NET 4.0. It uses the Get-Framework-Versions function to determine whether the installation is necessary or not. Internet connectivity is required, as the script downloads the setup file automatically (and sleeps for 360 seconds... I had a function in there to monitor for install completion at first; it turns out the setup file spawns so many child processes that the function just got confused and locked up -_-). Alternatively, you could drop the installation file in the folder specified in the $folderPath variable; that will skip the download and use the file. This function easily adapts to other versions, e.g. I use it for PowerShell 3 installs as well!

    Function install-dotNet4 () {
        if(($InstalledDotNET -eq "4.0") -or ($InstalledDotNET -eq "4.0c")){
            write-host ".NET 4.0 Framework is already installed" -foregroundcolor Green
        } else{
            #set a var for the folder you are looking for
            $folderPath = 'C:\Temp'
            #Check if folder exists, if not, create it
            if (Test-Path $folderpath){
                Write-Host "The folder $folderPath exists." -ForeGroundColor Green
            } else{
                Write-Host "The folder $folderPath does not exist, creating..." -NoNewline -ForegroundColor Red
                New-Item $folderpath -type directory | Out-Null
                Write-Host " - done!" -ForegroundColor Green
            }
            # Check if file exists, if not, download it
            $file = $folderPath+"\dotNetFx40_Full_x86_x64.exe"
            if (Test-Path $file){
                write-host "The file $file exists." -ForeGroundColor Green
            } else {
                #Download Microsoft .Net 4.0 Framework
                Write-Host "Downloading Microsoft .Net 4.0 Framework..." -nonewline -ForeGroundColor DarkYellow
                $clnt = New-Object System.Net.WebClient
                $url = "http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe"
                $clnt.DownloadFile($url,$file)
                Write-Host " - done!" -ForegroundColor Green
            }
            #Install Microsoft .Net Framework
            Write-Host "Installing Microsoft .Net Framework..." -nonewline -ForegroundColor DarkYellow
            $dotNET4 = $folderPath+"\dotNetFx40_Full_x86_x64.exe /quiet /norestart"
            Invoke-Expression $dotNET4
            write-host " - done!" -ForegroundColor Green
            start-sleep -seconds 360
        }
    }

    Read the article
