Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • postfix relaying all mail through office365 problems

    - by amrith
    This is a rather long question with a long list of things tried and travails, so please bear with me. The summary is this: I am able to relay email from Ubuntu through Office 365 using Postfix; the configuration works, but only as one user. More specifically, the user who authenticates against Office 365 is the only valid "from". More details follow.
    I have a machine in Amazon's cloud on which I run a bunch of jobs and would like to have statuses mailed over to me. I use Office 365 at work, so I want to relay mail through Office 365. I'm most familiar with Postfix, so I used that as the MTA. The configuration is Ubuntu 12.04 LTS; I've installed postfix and mail-utils. For this example, let me say my company is "company.com" and the machine in question (through an Elastic IP and a DNS entry) is called "plaything.company.com". The hostname is set to "plaything.company.com", and so is /etc/mailname. On plaything, I have the following users registered: alpha, bravo, and charlie. I have the following configuration:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_size_limit = 0
        mydestination = plaything.company.com, localhost.company.com, , localhost
        myhostname = plaything.company.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost = [smtp.office365.com]:587
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    As the machine is called plaything.company.com, I went through the exercise of registering all the appropriate DNS entries to make Office 365 recognize that I owned plaything.company.com, which allowed me to create a user called [email protected] in Office 365. In Office 365, I set up [email protected] as having another email address of [email protected]. Then I made the following sender_canonical:

        [email protected] [email protected]

    I created a sasl_passwd file that reads (let's just say the password for [email protected] is 1234...456):

        smtp.office365.com [email protected]:123456password123456

    With all this set up, log in as alpha and run:

        mail [email protected]
        Cc:
        Subject: test
        test

    and the whole thing works wonderfully. The email gets sent off by Postfix, TLS works like a champ, Postfix authenticates as daemon@..., and [email protected] in Office 365 gets the email message. The issue comes up when logged in as bravo on the machine. The sender is [email protected] and Office 365 says:

        status=bounced (host smtp.office365.com[132.245.12.25] said: 550 5.7.1 Client does not have permissions to send as this sender (in reply to end of DATA command))

    This is because I'm trying to send mail as bravo@... while authenticating with Office 365 as daemon@.... The reason it works with alpha@... is that in Office 365 I set up [email protected] as having another email address of [email protected].
    In "Postfix Relay to Office365", Miles Erickson answers the question thusly: (1) Don't send mail to Office 365 as a user from your Office365-hosted e-mail domain. Use a subdomain instead, e.g. [email protected] instead of [email protected]. It wouldn't hurt to set up an SPF record for services.mydomain.com or whatever you decide to use. (2) Don't authenticate against mail.messaging.microsoft.com as an Office365 user. Just connect on port 25 and deliver the mail to your domain as any foreign SMTP agent would do.
    OK, I've done #1. I have those records in DNS, but for the most part they are not relevant once Office 365 recognizes that I own the domain. Here are those records:

        CNAME records: msoid.plaything.company.com, autodiscover.plaything.company.com
        MX record:     plaything.company.com (plaything-company-com.mail.protection.outlook.com)
        TXT record:    plaything.company.com (v=spf1 include:spf.protection.outlook.com -all)

    I've tried #2, but no matter what I do, Office 365 just blows away the connection with "not authenticated". Even a simple telnet to port 25 and an attempt to send doesn't work:

        250 BY2PR01CA007.outlook.office365.com Hello [54.221.245.236]
        530 5.7.1 Client was not authenticated
        Connection closed by foreign host.

    Is there someone out there who has this kind of configuration working, where multiple users on a Linux machine are able to relay mail using Postfix through Office 365? There has to be someone out there doing this who can tell me what is wrong with my setup ...
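
    One way to make this work for every local user, without registering each address in Office 365, is to rewrite all senders to the account Postfix authenticates as. This is a minimal sketch, assuming all status mail may appear to come from the daemon address (the daemon@plaything.company.com address is a placeholder reconstructed from the question, and the map type changes from hash: to the built-in regexp: type):

        # /etc/postfix/main.cf
        sender_canonical_maps = regexp:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical
        # rewrite any non-empty envelope/header sender to the authenticated account
        /./    daemon@plaything.company.com

    After editing, a "sudo service postfix reload" should be enough (regexp tables do not need postmap). The alternative, keeping per-user "from" addresses, requires each address to be added to the Office 365 mailbox as an alias or granted Send As rights, exactly as was already done for alpha.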

    Read the article

  • Installing RubyGems 1.9.1

    - by ell
    I have successfully installed ruby1.9.1, but after downloading the .tgz archive offered here and doing sudo ruby1.9.1 setup.rb I get this:

        /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems/source_index.rb:62:in `installed_spec_directories': undefined method `path' for Gem:Module (NoMethodError)
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems/source_index.rb:52:in `from_installed_gems'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems.rb:914:in `source_index'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems/gem_path_searcher.rb:98:in `init_gemspecs'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems/gem_path_searcher.rb:13:in `initialize'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems.rb:873:in `new'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems.rb:873:in `searcher'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems.rb:495:in `find_files'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems.rb:1034:in `load_plugins'
            from /home/elliot/Downloads/rubygems-1.4.1 (2)/lib/rubygems/gem_runner.rb:84:in `<top (required)>'
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from setup.rb:25:in `<main>'

    Why is installing RubyGems with Ruby 1.9.1 so painful? How can I install it correctly? Thanks in advance, ell.
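
    That trace is a known incompatibility between older standalone RubyGems tarballs (1.4.x) and the RubyGems already bundled with Ruby 1.9. A hedged sketch of the usual ways out on Ubuntu; the package name and tarball version are assumptions, so check what your release actually provides:

        # Ruby 1.9 ships RubyGems, so the packaged gem command is usually enough:
        sudo apt-get install ruby1.9.1-full      # typically provides gem1.9.1 alongside ruby1.9.1
        gem1.9.1 --version

        # If you really want the tarball route, use a current RubyGems release and
        # avoid paths containing spaces/parentheses such as "rubygems-1.4.1 (2)":
        cd ~/Downloads/rubygems-1.8.x
        sudo ruby1.9.1 setup.rb

    If gem1.9.1 already reports a version, there is nothing further to install; the standalone setup.rb step can simply be skipped.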

    Read the article

  • How to set up a Sony Vaio PCG-4121EM 3G modem?

    - by Ivan
    We've bought a Sony Vaio PCG-4121EM that is supposed to have a built-in 3G modem. It has a SIM-card slot on its bottom. We've inserted a newly bought SIM card, but nothing happened and the modem is still not visible among the computer's devices (neither in Windows Device Manager nor in the "Modems" Control Panel applet). How do we turn it on? I would usually look to enable a built-in device in the BIOS setup, but there seems to be no BIOS setup on this Vaio - the Windows 7 splash screen appears immediately when I turn the computer on.

    Read the article

  • OWB 11gR2 - Find and Search Metadata in Designer

    - by David Allan
    Here are some tools and techniques for finding objects, specifically in the design repository. There are ways of navigating and collating objects that are useful for day-to-day development and build-time usage; this includes features out of the box and utilities constructed on top. There are a variety of techniques to navigate and find objects in the repository: the first three are out of the box, the fourth is an expert utility.
    1. Navigating by the tree, grouping by project and module - OK if you are aware of the exact module/folder that objects reside in. The structure panel is a useful way of finding parts of an object, especially when it is large, rather than using the canvas. In large-scale projects it helps to have accelerators (either find or collections below).
    2. Advanced find to search by name - 11gR2 included a find capability specifically for large-scale projects. There were improvements in both the tree search and the object editors (including highlighting in mapping, for example). So you can now do regular-expression-based search and quickly navigate to objects within a repository.
    3. Collections - logically organize your objects into virtual folders by shortcutting the actual objects. This is useful for a range of things since all the OWB services operate on collections too (export/import, validation, deployment). See the post here for new collection functionality in 11gR2.
    4. Reports for searching by type, updated on, updated by, etc. - useful for activities such as periodic incremental actions (deploy all mappings changed in the past week). The report-style view is useful since I can quickly see who changed what and when. You can see all the audit details for objects within each object's property inspector, but it's useful to just get all objects changed today, for example, or all objects changed since my last build.
    This utility combines both UI extensions via experts and the public views on the repository. In the figure to the right you see the contextual option 'Object Search' which invokes the utility; you can see I have quite a number of modules within my project. Figuring out all the potential objects which have been changed is not simple. The utility is an expert which provides this kind of search capability: it produces a report of the objects in the design repository which satisfy some filter criteria. The types of criteria include objects updated in the last n days, optionally filtered by the user who updated them, by project, and by type (tables/mappings etc.). The search dialog appears with these options; you can multi-select the object types, so for example you can select TABLE and MAPPING. It's also possible to search across projects if need be. If you have multiple users using the repository, you can define the OWB user name in the 'Updated by' property to restrict the report to just that user. Finally, there is a search name that will be used for some of the options such as building a collection - this name is used for the collection to be built. In the example I have done, I've just searched my project for all process flows and mappings that users have updated in the last 7 days. The results of the query are returned in a table containing the object names, types, full path and audit details. The columns are sortable; you can sort the results by name, type, path etc.
    One of the cool things here is that you can then perform operations on these objects - such as edit them, export a single selection or the entire results to MDL, create a collection from the results (now you have a saved set of references in the repository and could do deploy/export etc.), create a deployment script from the results... or even add in your own ideas! You can see from this that you can do bulk operations on sets of objects based on search results. So, for example, selecting the 'Build Collection' option creates a collection with all of the objects from my search; you can subsequently deploy/generate/maintain this collection of objects. Under the hood, the expert is just basic OMB commands from the product and the use of the public views on the design repository. You can see how easy it is to build up macro-like capabilities that will help you do day-to-day as well as build-like tasks on sets of objects.
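
    For scripted (non-UI) use, the same audit questions can be asked directly of the design repository's public views. A hedged SQL sketch: the view and column names below only follow the usual ALL_IV_* naming convention and are assumptions; check the OWB API and Scripting Reference for your release for the exact names before relying on them:

        -- objects changed in the last 7 days by a given user (view/column names assumed)
        SELECT object_name, object_type, updated_by, updated_on
        FROM   all_iv_all_objects
        WHERE  updated_on > SYSDATE - 7
        AND    updated_by = 'MY_OWB_USER'
        ORDER  BY updated_on DESC;

    A query like this, run as the repository owner or a repository user, gives the same "changed since my last build" style answer outside of the Designer UI.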

    Read the article

  • RRAS VPN Server on Windows 2008 Behind NAT

    - by Chris
    Ok, so I have kind of a funky setup; let me see if I can describe it. I have a single VMware host with a public IP address, 74.xx.xx.x. Inside that host, I have 3 VMs:
    Web Server - 1 NIC - 192.168.199.20
    SQL Server - 1 NIC - 192.168.199.30
    RRAS/VPN Server - 2 NICs - 192.168.199.40 & 192.168.199.45
    Due to limitations of my ISP, all of the VMs are connected to the host via NAT. I have NAT set up for the web server so all incoming requests on 74.xx.xx.x on port 80 route to 192.168.199.20. This works fine. Now I want to set up a Windows 2008 VPN server inside this NAT network and forward the correct traffic to it. My questions are as follows: What are the TCP/UDP ports that I have to forward? What special configuration is needed on the server and clients, since this is behind a NAT? Any other advice would be wonderful.
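
    On the port question, what has to be forwarded from 74.xx.xx.x to 192.168.199.40 depends on which tunnel type the RRAS server offers; a hedged summary (verify against current Microsoft RRAS documentation for your exact build):

        # PPTP:        TCP 1723 plus GRE (IP protocol 47); the NAT layer must support GRE passthrough
        # L2TP/IPsec:  UDP 500 and UDP 4500 (NAT-T); clients behind their own NAT also need the
        #              AssumeUDPEncapsulationContextOnSendRule registry value set
        # SSTP:        TCP 443 only, which is usually the simplest option when everything sits behind NAT

    SSTP avoids the GRE and NAT-T complications entirely, at the cost of requiring a certificate the clients trust on the RRAS server.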

    Read the article

  • SharePoint 2010 Hosting :: SharePoint 2010 Custom Web Template

    - by mbridge
    SharePoint 2010 offers some changes and additions to the SharePoint 2007 approach. Site definitions and publishing providers remain largely the same, but site templates created from the SharePoint UI or SharePoint Designer are now saved to a .WSP file, the same solution deployment packaging file format used for deploying custom SharePoint solutions. Site Templates saved to a .WSP solution file can be imported into Visual Studio for additional customization. Introducing the WebTemplate Feature Element The WebTemplate element, introduced in SharePoint 2010, allows site templates to be defined and deployed as a Feature as part of a solution package. A WebTemplate element feature can be used to deploy site templates in either a Farm or Sandbox solution - without modification. If deployed as a Farm feature and solution, site templates will appear in the site collection provisioning page in Central Administration and can be used to provision new site collections, or within a Site Collection to create sub-sites. If deployed as a Site feature and Sandbox solution, site templates will appear within the site collection to support creating a root site or sub-sites. Creating a new WebTemplate Feature in Visual Studio 2010 In addition to supporting the ability to save and import Site Templates created from the SharePoint UI into Visual Studio for customization, it can also be used to create new site templates from scratch. In the following sample we will walk through how to create a new WebTemplate solution based on  a customized version of the out-of-box Blank Site. 1. Create a new Empty SharePoint Project in Visual Studio 2010. 2. Add a new Empty Element to the project. we like to create folders for each type of element in our solution, so in our sample, we have created a Web Templates folder, and then added the BLANKENT element. NOTE: The Elements folder MUST share the same name as the WebTemplate name property. 3. Open the empty Elements.xml and add the <WebTemplate /> element block. 4. Copy the default.aspx and ONET.XML files from the STS site definition location at 14\TEMPLATES\Site Templates\STS. We will customize the ONET.XML in the next section. Open the properties for each file and set the Deployment Type to ElementFile. This ensures the files are deployed with the Element when included in a Feature. 5. By default a new feature is added to the solution for you automatically when a new element is added to the solution. Rename and edit the feature as appropriate. Select Farm for the scope to deploy the WebTemplate to the entire farm, or Site for a sandboxed solution. Customize the ONET.XML At this point, you have a working WebTemplate solution that will deploy the identical site to the out-of-box Blank Site, however the ONET.XML supporting the STS site definition contains 3 configurations – essentially 3 separate site templates and can be simplified before customizing. In the following sample, we have trimmed the ONET.XML to the essentials for a single Site Template, and added references to the <SiteFeatures /> and <WebFeatures /> elements to include the SharePoint Standard and Enterprise features. We have left the top-level navigation bar, and the default page module intact, but removed all other extraneous markup.
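
    For step 3, a minimal sketch of what the <WebTemplate /> block in Elements.xml can look like. The BLANKENT name and the STS base values follow the walkthrough (STS configuration 1 is the out-of-box Blank Site); Title, Description and DisplayCategory are illustrative assumptions:

        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <WebTemplate
              Name="BLANKENT"
              Title="Blank Site (Custom)"
              Description="Customized blank site deployed as a WebTemplate feature"
              BaseTemplateName="STS"
              BaseTemplateID="1"
              BaseConfigurationID="1"
              DisplayCategory="Custom" />
        </Elements>

    Because Name must match the element folder (BLANKENT here), the copied ONET.XML and default.aspx in that folder are what the new template actually provisions from.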

    Read the article

  • They may block off Howard Street—but Oracle OpenWorld is a two-way street.

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Sr. Director, Oracle Accelerate for Midsize Companies
    “Engineered to Inform and Inspire”—that’s the theme of Oracle OpenWorld 2012. In early October, tens of thousands of attendees will descend on the streets of San Francisco because they share one thing in common: the desire to learn more about Oracle. You might think that’s the way we, Oracle employees, look at this event—as just another opportunity for attendees to learn about what we do. But it’s really a two-way street. Every year I’m amazed by how informed and inspired I am by our customers and their companies. Midsize companies buy Oracle to grow. As part of the Oracle Accelerate for Midsize Companies team I get to talk with our partners and business leaders at growing companies almost every day, usually via phone. Oracle OpenWorld presents the perfect opportunity to meet some of them in person, in an informal setting, and in one of the most beautiful cities in the world. The stories our customers tell me about their businesses provide vivid examples of how they have overcome the challenges of managing increasingly complex global operations and growing during uncertain economic conditions. It’s no secret that my favorite session at Oracle OpenWorld (besides Larry Ellison’s keynotes and the Customer Appreciation Event, of course) is the Oracle Accelerate Customer Panel. This year we’re featuring executives from three companies who deployed Oracle ERP rapidly to support their company’s growth:
    Chris Powell, VP and Corporate Controller of Beats by Dr. Dre, a California-based designer and manufacturer of premium headphones (sorry, no free samples)
    Iñaki Zuazo, CIO of Industrias Juno, a building materials provider based in Spain
    Kamran Moosa, Project Coordinator for Spartan Engineering, a provider of engineering and construction support services for an LPG storage project in Texas
    That’s a pretty diverse lineup, and it will be interesting to hear the perspectives of both IT and financial project stakeholders. The session, “Oracle Accelerate Customer Case Studies: Rapid Deployment of Oracle Applications”, is at 3:30 pm on Wednesday, October 3, in the Concert room at the Palace Hotel. Oracle loves our hometown of San Francisco and it’s a great place to host Oracle OpenWorld. It’s now San Francisco’s largest conference and the city closes off Howard Street to better accommodate the attendees. Some Bay Area commuters may be inconvenienced for a few days by this closure, but the conference brings about $100 million into the local economy. Now that’s a two-way street.
    More Oracle Accelerate at Oracle OpenWorld:
    “Faster, Better, Cheaper Application Deployment with Oracle Business Accelerators”, Monday, October 1st, 10:45 a.m., Moscone West Room 3016
    “Oracle Accelerate and Oracle Business Accelerators for Midsize Companies” (partners only), Wednesday, October 3, 10:15 a.m., Marriott – Golden Gate B
    Visit the Oracle Accelerate and Oracle Business Accelerator Kiosk in the Moscone West Exhibit Grounds
    Download the Focus On Oracle Accelerate for Midsize Companies Focus document

    Read the article

  • 2D character controller in unity (trying to get old-school platformers back)

    - by Notbad
    These days I'm trying to create a 2D character controller in Unity (using physics). I'm fairly new to physics engines and it is really hard to get the control feel I'm looking for. I would be really happy if anyone could suggest a solution for a problem I'm having. This is my FixedUpdate right now:

        public void FixedUpdate()
        {
            Vector3 v = new Vector3(0, -10000 * Time.fixedDeltaTime, 0);
            _body.AddForce(v);
            v.y = 0;
            if (state(MovementState.Left))
            {
                v.x = -_walkSpeed * Time.fixedDeltaTime + v.x;
                if (Mathf.Abs(v.x) > _maxWalkSpeed) v.x = -_maxWalkSpeed;
            }
            else if (state(MovementState.Right))
            {
                v.x = _walkSpeed * Time.fixedDeltaTime + v.x;
                if (Mathf.Abs(v.x) > _maxWalkSpeed) v.x = _maxWalkSpeed;
            }
            _body.velocity = v;
            Debug.Log("Velocity: " + _body.velocity);
        }

    I'm trying here to just move the rigidbody by applying gravity and a linear force for left and right. I have set up a physics material with no bounce, 0 friction when moving, and 1 friction when standing still. The main problem is that I have colliders with slopes, and the velocity changes between going up a slope (slower), going down a slope (faster) and walking on a flat collider (normal). How could this be fixed? As you can see, I'm always applying the same velocity on the x axis. The player is set up with a sphere at the feet position, which is the rigidbody I'm applying forces to. Any other tips that could make my life easier with this are welcome :). P.D. While coming home I noticed I could solve this by applying a constant force parallel to the surface the player is walking on, but I don't know if it is the best method.
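
    One way to get a constant walking speed on slopes (the idea mentioned in the P.D.) is to raycast down for the ground normal and push the body along the surface tangent instead of always along the x axis. A hedged C# sketch, not the poster's code: the 1.5f ray length and the grounded handling are assumptions, and _body, _maxWalkSpeed and state() are the fields/helpers from the post:

        void FixedUpdate()
        {
            // Custom gravity, as in the original code.
            _body.AddForce(new Vector3(0f, -10000f * Time.fixedDeltaTime, 0f));

            float input = 0f;
            if (state(MovementState.Left))       input = -1f;
            else if (state(MovementState.Right)) input = 1f;

            // Sample the surface under the character; hit.normal describes the slope.
            RaycastHit hit;
            bool grounded = Physics.Raycast(_body.position, Vector3.down, out hit, 1.5f);

            if (grounded && input != 0f)
            {
                // Tangent of the surface in the XY plane, pointing to the right.
                Vector3 tangent = Vector3.Cross(hit.normal, Vector3.forward).normalized;
                Vector3 walk = tangent * (input * _maxWalkSpeed);
                // Same speed along the surface whether it is flat, uphill or downhill.
                _body.velocity = new Vector3(walk.x, walk.y, _body.velocity.z);
            }
        }

    The trade-off is that velocity is written directly while walking, so steps, edges and jumps need their own handling; a kinematic CharacterController-style approach is the usual alternative if the physics feel keeps fighting you.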

    Read the article

  • One IP, One Port, Multiple Servers

    - by Adrian Godong
    I am looking for a solution to forward one public IP address and one specific port to different machines based on hostname (as of now, I need it only for HTTP). The current setup is NAT on a commodity router (it only provides simple public port to private IP address/port forwarding). I can add a Windows Server 2008 R2 machine in front of the router if required, but I prefer not to do so. So ideally, I would like to keep the current setup and have the forwarding done on one of the Windows servers. Is it possible to do this?
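
    What is being described is name-based reverse proxying: one listener on the public IP/port that picks a backend from the HTTP Host header, so it only works for HTTP(S). On the Windows Server 2008 R2 box this is typically done with IIS plus ARR/URL Rewrite; a hedged nginx-style sketch of the same idea, where hostnames and backend IPs are placeholders:

        server {
            listen 80;
            server_name app1.example.com;
            location / {
                proxy_pass       http://192.168.1.10:80;   # backend for app1
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
            }
        }

        server {
            listen 80;
            server_name app2.example.com;
            location / {
                proxy_pass       http://192.168.1.20:80;   # backend for app2
                proxy_set_header Host $host;
            }
        }

    The router then forwards public port 80 to whichever machine runs the proxy, and that machine fans requests out by hostname.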

    Read the article

  • VirtualBox guest network lost after host disconnects

    - by webjunk
    I am running VirtualBox both on a Snow Leopard OSX host machine and on a Windows Vista host machine. Whenever my host machines lose internet connection the guest machines seem to lose internet connectivity permanently even after the host connection to the Internet is reestablished. Resetting guest networking on the guest os, disconnecting cable via host virtualbox settings, and even restarting the guest OS do not help at all. The guest no longer can access the Internet. The only solution is restarting VirtualBox itself while the host is connected to the Internet. This really gets to be a pain when the host goes into sleep mode or I disconnect my laptop at work and then reconnect at home. Guests are setup with NAT networking. It affects guest machines with both Ubuntu and Windows XP OS'es. Is this expected behavior? Does anyone know of a fix? Or am I setup incorrectly?
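
    Not a fix for the underlying NAT-engine behaviour, but a lighter workaround than restarting VirtualBox is to bounce the guest's virtual link from the host once the host is back online. A hedged sketch: the VM name is a placeholder and the "1" in setlinkstate1 assumes the NAT adapter is the first NIC:

        VBoxManage list runningvms
        VBoxManage controlvm "My Guest VM" setlinkstate1 off
        VBoxManage controlvm "My Guest VM" setlinkstate1 on
        # then renew the lease inside the guest, e.g. "sudo dhclient eth0" on Ubuntu
        # or "ipconfig /release && ipconfig /renew" on Windows XP

    Unplugging and replugging the virtual cable forces the guest to renegotiate with the NAT engine, which is usually enough after a host sleep/resume or network change.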

    Read the article

  • Nginx + CodeIgniter + Invision Power Board rewrite rule problem

    - by Ufuk
    Hello, I have a setup with a folder structure like:

        /
        /application
        /system
        /forum
        index.php

    My nginx configuration:

        if (!-f $request_filename) {
            rewrite ^/(.*)$ /index.php/$1 last;
        }

        location ~ /index.php {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /usr/local/nginx/conf/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/vhosts/mydomain.com/httpdocs/index.php;
        }

    This setup works correctly and redirects URLs to the correct CodeIgniter controller, yet it also forwards www.mydomain.com/forum to CodeIgniter, which eventually shows CodeIgniter's 404 page. Does anyone know the correct configuration for my setup? Thank you.
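
    A hedged sketch building on the config above: move the catch-all rewrite into a "location /" block and carve /forum out with a ^~ prefix location, so IP.Board requests never reach CodeIgniter's front controller. The root path follows the question; adjust or omit it if root is already set at server level:

        location / {
            if (!-f $request_filename) {
                rewrite ^/(.*)$ /index.php/$1 last;   # CodeIgniter front controller, as before
            }
        }

        location ^~ /forum {
            root  /var/www/vhosts/mydomain.com/httpdocs;
            index index.php;
            location ~ \.php$ {
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_index index.php;
                include       /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    The ^~ modifier stops nginx from falling through to the regex location for /index.php, and because the rewrite now lives inside "location /", it is never applied to anything under /forum.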

    Read the article

  • Easy Credential Caching for Git

    A common question since launching our Git support is whether there is a way to cache your username and password so you don’t have to enter it on every push.  Well thanks to Andrew Nurse from the ASP.Net team, there is now a great solution for this! Credential Caching in Windows to the Rescue Using the Git extension point for credential caching, Andrew created an integration into the Windows Credentials store. After installing git-credential-winstore instead of getting that standard prompt for a username/password, you will get a Windows Security prompt. From here your credentials for CodePlex will be stored securely within the Windows Credential Store. Setup The setup is pretty easy. Download the application from Andrew's git-credential-winstore project. Launch the executable and select yes to have it prompt for credentials. That's it. Make sure you are running the latest version of msysgit, since the credential's API is fairly new. Thanks to Andrew for sharing his work.  If you have suggestions or improvements you can fork the code here.
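
    A hedged sketch of the Git side of this, in case the installer does not register the helper for you; it assumes git-credential-winstore.exe ends up somewhere on PATH:

        # see whether the installer already registered a helper
        git config --global credential.helper

        # if not, point Git at the winstore helper manually
        git config --global credential.helper winstore

        # the first HTTPS push/fetch shows the Windows Security prompt once;
        # afterwards credentials come from the Windows Credential Store
        git push origin master

    Removing or rotating a saved password is then done in the Windows Credential Manager control panel rather than in Git itself.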

    Read the article

  • automysqlbackup not working weirdly on shared host

    - by KPL
    I'm on HostGator and I have non-root SSH access. I manually set up automysqlbackup in a /home/user/automysqlbackup folder, created a file called runbackup in the same folder, and chmod +x'ed it. The content:

        /home/user/automysqlbackup/automysqlbackup /home/user/automysqlbackup/myserver.conf

    Now, when I run ./runbackup from the shell, no email is sent. I've set up a daily cron job; the crontab line reads:

        24 06 * * * /home/user/automysqlbackup/runbackup

    When the cron job runs, I do get an email, but the subject is "WARNING: Error Reported - MySQL Backup Log and SQL Files for localhost - 012-03-19_06h24m". The email does have an attachment, but it just has the SQL for creating tables; no row data at all. I don't know what I'm doing wrong and this is freaking me out! I tried writing a custom script as well, using this guide, but then mutt doesn't send the email. The OS used by HostGator is CentOS.
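
    A likely culprit for the difference between the shell run and the cron run is cron's minimal environment: PATH under cron often lacks the directories where mysqldump, gzip or the mailer live, so parts of the script fail silently. A hedged sketch of runbackup with an explicit environment and a log to inspect (paths follow the question; the PATH entries are assumptions for a typical shared host):

        #!/bin/sh
        # cron does not get your login PATH, so make the environment explicit
        PATH=/usr/local/bin:/usr/bin:/bin:/home/user/bin
        export PATH

        /home/user/automysqlbackup/automysqlbackup \
            /home/user/automysqlbackup/myserver.conf \
            >> /home/user/automysqlbackup/runbackup.log 2>&1

    Comparing runbackup.log from a cron run with a manual run usually shows which binary cannot be found or which mysqldump permission/option error is producing schema-only dumps.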

    Read the article

  • Deploying BAM Data Control Application to WLS server

    - by [email protected]
    Typically we would test our ADF pages that use the BAM Data Control using the integrated WLS server (ADRS). If we have to deploy this same application to a standalone WLS, we have to make sure we have the BAM server connection created in WLS; unless we do that, we may face runtime errors.
    In Development mode of WLS (Reference): for development-mode WebLogic Server, you can set the mode to OVERWRITE to test user names and passwords. You can set the mode by running setDomainEnv.cmd or setDomainEnv.sh with the following option added to the command. Add the following to the JAVA_PROPERTIES entry in the <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh file: -Djps.app.credential.overwrite.allowed=true
    In Production mode of WLS: enable MDS. Create and/or register your MDS repository (for more details refer this). Edit adf-config.xml from your application and add the following tag:

        <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
          <mds-config version="11.1.1.000">
            <persistence-config>
              <metadata-store-usages>
                <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos">
                </metadata-store-usage>
              </metadata-store-usages>
            </persistence-config>
          </mds-config>
        </adf-mds-config>

    Deploy the application to the WLS server, picking the appropriate repository from the MDS Repository dialog that pops up during deployment.
    Enterprise Manager (use these steps if using a version prior to the 11gR1 PS1 release of JDeveloper): Go to EM (http://<host>:<port>/em). In the left pane, under Deployments, select Application1 (your application). In the right pane, in the top dropdown select "System Mbean Browser->oracle.adf.share.connections->Server: AdminServer->Server: AdminServer->Application:<Appname>->ADFConnections". In the right pane click "Operations->CreateConnection". Enter the Connection Type as "BAMConnection". Enter the connection name, the same as the one defined in JDev. Click "Invoke", then "Return", then click on Operation->Save. Now, under ADFConnections in the navigator, select the connection just created and enter all the configuration details. Save and run the page.
    Enterprise Manager (use these steps, or the steps above, if using 11gR1 PS1 or newer): Go to EM (http://<host>:<port>/em). In the left pane, under Deployments, select Application1 (your application). In the right pane, click on "Application Deployment" to invoke the dropdown and select "ADF -> Configure ADF Connections". Select the Connection Type as "BAM" from the drop-down. Enter the connection name to be the same as the one defined in JDev. Click on "Create Connection"; this should add a new row under "BAM Connections". Select the new connection and click on the "Edit" icon; this should bring up a dialog. Specify appropriate values for all connection parameters - Username, Password, BAM Server Host, BAM Server Port, Webtier Server Host, Webtier Server Port and BAM Webtier Protocol - and then click on OK to dismiss the dialog. Click on "Apply". Run the page.
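
    For the development-mode step, a hedged sketch of the setDomainEnv.sh change. The post names the JAVA_PROPERTIES entry; appending to EXTRA_JAVA_PROPERTIES is a common equivalent extension point in 11g domain scripts, but verify against your own setDomainEnv.sh before editing:

        # <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh
        EXTRA_JAVA_PROPERTIES="-Djps.app.credential.overwrite.allowed=true ${EXTRA_JAVA_PROPERTIES}"
        export EXTRA_JAVA_PROPERTIES
        # restart the admin and managed servers afterwards so the flag takes effect

    The flag only makes sense for development-mode testing of credentials; production deployments should go through the EM connection steps described above instead.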

    Read the article

  • diagnosing a multicast issue using wireshark

    - by Abruzzo Forte e Gentile
    I have a network that is set up for multicast traffic. My setup is the following:

        Machine A: a server that generates multicast traffic
        Machine A: a few clients subscribing to that multicast traffic
        Machine B: a few clients subscribing to that multicast traffic

        # Address I am using
        IP:   239.193.0.21
        PORT: 20401

    The clients on machine A, even if they join the group (I can see IGMP messages through Wireshark), don't receive any data, while (and this is the funny part) machines B, C and D receive everything. I sorted that issue by completely disabling the Linux firewall. Before doing that, I had enabled multicast on the firewall (default 'reject all'):

        iptables -A INPUT -m addrtype --src-type MULTICAST -j ACCEPT

    My question is the following: what can I check in Wireshark that would help me spot such firewall issues in the future? For TCP/IP I can do this by using ping and looking at rejected ICMP packets. What can I check/monitor for multicast? I am using Linux / Red Hat Enterprise 6.2.
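
    Two things help here: a narrower firewall rule than disabling everything, and a Wireshark filter that shows both the group membership and the data. Note that the quoted rule matches --src-type MULTICAST, but multicast data carries a multicast destination, so a destination match is usually what is needed. A hedged sketch using the group and port from the question:

        # iptables: allow IGMP membership traffic plus this specific group/port
        iptables -A INPUT -p igmp -j ACCEPT
        iptables -A INPUT -d 239.193.0.21 -p udp --dport 20401 -j ACCEPT

        # Wireshark display filter: membership reports/queries plus the data stream
        #   igmp || (ip.dst == 239.193.0.21 && udp.port == 20401)

        # If joins are visible but no UDP data arrives, check what the firewall is dropping:
        iptables -vnL INPUT

    Seeing IGMP joins with zero matching UDP packets on the wire points at routing/switch snooping; seeing the UDP packets in Wireshark but not in the application points back at the local firewall or rp_filter.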

    Read the article

  • How can I do Ubuntu Lucid Desktop installation with HTTP preseed file and desktop installation CD?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso. I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly as a LiveCD. An example of the boot parameters I've tried:

        auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash --

    If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What am I missing? From the Apache log of the server hosting preseed.cfg, I see that the installer has no problem fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct. Any ideas?
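
    Since the preseed file is demonstrably being fetched, the piece that usually makes the desktop (casper/ubiquity) image run non-interactively is automatic-ubiquity rather than only-ubiquity. A hedged sketch of a kernel command line to try; option behaviour differs between Ubiquity releases, so treat this as a starting point rather than a recipe:

        boot=casper automatic-ubiquity noprompt \
          url=http://mydomain.com/path/preseed.cfg \
          debconf/priority=critical locale=en_US console-setup/layoutcode=us \
          quiet splash --

    Also worth noting: the desktop image's Ubiquity honours only a subset of the debian-installer preseed keys in example-preseed.txt; the alternate/server ISO runs the full d-i and is often the easier route for a completely hands-off install.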

    Read the article

  • Segmentation Fault (11) with modwsgi on CentOS 5.7 when running pyramid app

    - by carbotex
    I'm getting a Segmentation fault error when trying to access the "Hello World" pyramid app. This error only occurs when running against the CentOS 5.7 setup; there is no problem whatsoever when tested against OSX and Arch Linux. Could it be a CentOS-specific issue?

        [error] [client 10.211.55.2] Premature end of script headers: pyramid.wsgi
        [notice] child pid 31212 exit signal Segmentation fault (11)

    I have tried to follow the troubleshooting guides posted here http://code.google.com/p/modwsgi/wiki/InstallationIssues which suggest that it might be caused by a missing shared library. A quick check reveals that shared libraries are not the issue.

        [centos57@localhost modules]$ ldd mod_wsgi.so
            linux-gate.so.1 => (0x00e6a000)
            libpython2.7.so.1.0 => /home/python/lib/libpython2.7.so.1.0 (0x0024c000)
            libpthread.so.0 => /lib/libpthread.so.0 (0x00da8000)
            libdl.so.2 => /lib/libdl.so.2 (0x00cd6000)
            libutil.so.1 => /lib/libutil.so.1 (0x00110000)
            libm.so.6 => /lib/libm.so.6 (0x0085c000)
            libc.so.6 => /lib/libc.so.6 (0x00682000)
            /lib/ld-linux.so.2 (0x0012b000)

    Then I found another clue that might solve my problem, but unfortunately libexpat is not the source of the problem either. http://code.google.com/p/modwsgi/wiki/IssuesWithExpatLibrary

        [centos57@localhost bin]$ ldd ~/httpd/bin/httpd | grep expat
            libexpat.so.1 => /usr/local/lib/libexpat.so.1 (0x00b00000)
        [centos57@localhost bin]$ strings /usr/local/lib/libexpat.so.1 | grep expat
        libexpat.so.1
        expat_2.0.1
        [centos57@localhost bin]$ python
        Python 2.7.2 (default, Nov 26 2011, 08:08:44)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import pyexpat
        >>> pyexpat.version_info
        (2, 0, 0)
        >>>

    I've been pulling my hair out trying to figure out what I'm missing in my setup. Why does the problem only occur on CentOS? Here is the detailed setup: Apache 2.2.19, Python 2.7.2, mod_wsgi-3.3.

    /home/httpd/conf/extra/pyramid.wsgi:

        from pyramid.paster import get_app
        application = get_app('/home/homecamera/hcadmin/root/production.ini', 'main')

    /home/httpd/conf/extra/modwsgi.conf:

        LoadModule wsgi_module modules/mod_wsgi.so

        WSGIScriptAlias /myapp /home/root/test.wsgi
        <Directory /home/root>
            WSGIProcessGroup pyramid
            Order allow,deny
            Allow from all
        </Directory>

        # Use only 1 Python sub-interpreter. Multiple sub-interpreters
        # play badly with C extensions.
        WSGIApplicationGroup %{GLOBAL}
        WSGIPassAuthorization On
        WSGIDaemonProcess pyramid user=daemon group=daemon processes=1 \
            threads=4 \
            python-path=/home/python/lib/python2.7/site-packages
        WSGIScriptAlias /hello /home/httpd/conf/extra/pyramid.wsgi
        <Directory /home/httpd/conf/extra>
            WSGIProcessGroup pyramid
            Order allow,deny
            Allow from all
        </Directory>

    Again, this same setup works on OSX and Arch Linux but not on CentOS 5.7. Could someone out there point me in the right direction before I run out of hair?

    When Apache was started with gdb, I got a couple of warnings:

        Reading symbols from /home/httpd/bin/httpd...done.
        Attaching to program: /home/httpd/bin/httpd, process 1821
        warning: .dynamic section for "/lib/libcrypt.so.1" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/lib/libutil.so.1" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations

    gdb output after hitting the refresh button to load pyramid:

        (gdb) cont
        Continuing.
        warning: .dynamic section for "/usr/lib/libgssapi_krb5.so.2" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/usr/lib/libkrb5.so.3" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/lib/libresolv.so.2" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations

        Program received signal SIGSEGV, Segmentation fault.
        [Switching to Thread 0x8edbb90 (LWP 1824)]
        0x0814c120 in EVP_PKEY_CTX_dup ()

    apache_error_log:

        [info] mod_wsgi (pid=1821): Starting process 'pyramid' with threads=1.
        [info] mod_wsgi (pid=1821): Initializing Python.
        [info] mod_wsgi (pid=1821): Attach interpreter ''.
        [info] mod_wsgi (pid=1821): Create interpreter 'web.domain.com:20000|/hcadmin'.
        [info] [client 10.211.55.2] mod_wsgi (pid=1821, process='pyramid', application='web.domain.com:20000|/hcadmin'): Loading WSGI script '/home/httpd/conf/extra/pyramid.wsgi'.
        [error] hello 1
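
    The crash inside EVP_PKEY_CTX_dup () points at two different OpenSSL builds being pulled into the same httpd process (for example Apache's mod_ssl linked against one libcrypto while the custom /home/python build's _ssl module links another), rather than at expat. A hedged diagnostic sketch; the mod_ssl.so path and the _ssl.so location under the custom prefix are assumptions, so adjust them to your layout:

        # which libssl/libcrypto does each side link against?
        ldd /home/httpd/bin/httpd /home/httpd/modules/mod_ssl.so 2>/dev/null | grep -E 'libssl|libcrypto'
        ldd /home/python/lib/python2.7/lib-dynload/_ssl.so | grep -E 'libssl|libcrypto'

        # if the versions differ, rebuild Python (or Apache) against the same OpenSSL,
        # or remove the conflicting module; mixing two libcrypto versions in one
        # process is a classic cause of exactly this kind of segfault on CentOS 5.

    This would also explain why the identical configuration works on OSX and Arch Linux, where only one OpenSSL build is involved.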

    Read the article

  • How do I install GRUB on a RAID system installation?

    - by root45
    I'm trying to set up and install Ubuntu on a RAID 1 setup. I have two disks, sdb and sdc. I've been following this guide https://help.ubuntu.com/community/Installation/SoftwareRAID which more or less works for getting everything set up and Ubuntu installed. The problem is at the end of the installation, when it tries to install GRUB. By default it tries my "first disk", which gives a "fatal error". I've tried installing it on a specific partition, e.g. sdb1, as well as on RAID devices, e.g. md0, md1, etc. Nothing seems to work. Edit: The actual error is "Unable to install GRUB in /dev/sdb Executing 'grub-install '/dev/sdb' failed. This is a fatal error." Then I'm taken back to the main install menu. If I choose the "Install the GRUB boot loader on a hard disk" option, I can pick the partition, but entering sdb2 or md1 gives the same error. So I went ahead and just didn't install GRUB, which means I now presumably have a working Ubuntu installation, but I can't boot it. I've tried booting from the LiveCD to install GRUB, but I can't chroot into my system because it doesn't seem to recognize that my disk is a Linux disk; there's an error about it being a RAID partition. So basically I would really like to know how you know which device to install GRUB to at installation time, or at the very least, how to install it onto my system now. I suppose I should also mention that sda is a Windows 7 installation that I would like to keep around and be able to access at boot. Thanks for any help.
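
    A hedged recovery sketch from the desktop live session; the device and array names follow the question, so double-check them against cat /proc/mdstat before running anything:

        sudo apt-get install mdadm              # in the live session only
        sudo mdadm --assemble --scan            # brings up /dev/md0, /dev/md1, ...
        cat /proc/mdstat
        sudo mount /dev/md0 /mnt                # whichever array holds / (mount /boot too if separate)
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sdb                   # MBR of the first RAID member
        grub-install /dev/sdc                   # ...and the second, so either disk can boot
        update-grub

    During installation the same idea applies: point GRUB at the member disks (sdb and sdc), not at a partition or the md device, and leave sda's Windows boot sector alone; update-grub/os-prober should then add Windows 7 to the menu.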

    Read the article

  • Blocking a distributed, consistent spam attack? Could it be something more serious?

    - by mattmcmanus
    I will do my best to try and explain this, as it's strange and confusing to me. I posted a little while ago about a sustained spike in MySQL queries on a VPS I had recently set up. It turned out to be a single post on a site I was developing. The post had over 30,000 spam comments! Since the site was one I was slowly building, I hadn't configured the anti-spam comment software yet. I've since deleted the particular post, which has given the server a break, but the post's URL keeps on getting hit. The frustrating thing is that every hit is from a different IP. How do I even start to block/prevent this? Is this even something I need to worry about? Here are some more specific details about my setup, just to give some context:
    Ubuntu 8.10 server with ufw set up.
    The site I'm building is in Drupal, which now has Mollom set up for spam control. It wasn't configured before.
    The requests happen inconsistently. Sometimes it's every couple of seconds and other times there's a while between hits. However, it's been going on pretty much constantly like that for over a week.
    Here is a sample of my Apache access log from the last 15 minutes, just for the page in question:

        dev.domain-name.com:80 97.87.97.169 - - [28/Mar/2010:06:47:40 +0000] "POST http://dev.domain-name.com/comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 202.149.24.193 - - [28/Mar/2010:06:50:37 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 193.106.92.77 - - [28/Mar/2010:06:50:39 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 194.85.136.187 - - [28/Mar/2010:06:52:03 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 220.255.7.13 - - [28/Mar/2010:06:52:14 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 195.70.55.151 - - [28/Mar/2010:06:53:41 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 71.91.4.31 - - [28/Mar/2010:06:56:07 +0000] "POST http://dev.domain-name.com/comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 98.209.203.170 - - [28/Mar/2010:06:56:10 +0000] "POST http://dev.domain-name.com/comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 24.255.137.159 - - [28/Mar/2010:06:56:19 +0000] "POST http://dev.domain-name.com/comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 77.242.20.18 - - [28/Mar/2010:07:00:15 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 94.75.215.42 - - [28/Mar/2010:07:01:34 +0000] "POST /comment/reply/3 HTTP/1.0" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 89.115.2.128 - - [28/Mar/2010:07:03:20 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 75.65.230.252 - - [28/Mar/2010:07:05:05 +0000] "POST http://dev.domain-name.com/comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 206.251.255.61 - - [28/Mar/2010:07:06:46 +0000] "POST /comment/reply/3 HTTP/1.0" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
        dev.domain-name.com:80 213.194.120.14 - - [28/Mar/2010:07:07:22 +0000] "POST /comment/reply/3 HTTP/1.1" 404 5895 "http://dev.domain-name.com/blog/2009/11/23/another" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"

    I understand this is an open-ended question, but any help or insight you could give would be much appreciated.
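
    Because every request comes from a different IP, blocking at the network layer is mostly damage limitation; the application-level fix already in place (Mollom, plus the deleted post now answering with a cheap 404) is what actually matters. If the log noise itself is the problem, a hedged fail2ban sketch that matches these exact requests; the filter name, log path and the vhost-first log format are assumptions based on the excerpt above:

        # /etc/fail2ban/filter.d/comment-reply-spam.conf
        [Definition]
        failregex = ^\S+:\d+ <HOST> .* "POST \S*/comment/reply/3 HTTP.*" 404
        ignoreregex =

        # /etc/fail2ban/jail.local
        [comment-reply-spam]
        enabled  = true
        filter   = comment-reply-spam
        port     = http,https
        logpath  = /var/log/apache2/*access*.log
        maxretry = 1
        bantime  = 86400

    Since each bot IP tends to hit only once or twice, this mainly stops repeat offenders; the distributed pattern itself is ordinary comment-spam botnet behaviour rather than a sign of something more targeted.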

    Read the article

  • What's required to configure Ubuntu to use a specific DNS server?

    - by ks78
    I've set up two Amazon EC2 instances, both running Ubuntu Server. One is configured as a DNS server running bind9, which will be used to allow EC2 instances to communicate with each other based on hostname rather than IP, since their private IPs may change. I think I have the DNS server set up correctly. I want to use the second EC2 instance to test the DNS server. Using Webmin, I've added the DNS server's private IP to the client's DNS Servers list and added the domain to the Search Domains list. I did have to edit /etc/dhcp3/dhclient.conf to make my changes stick. After a reboot, I expected I'd be able to ping or nslookup the DNS server from the test client, but it can't seem to find the server. Is there something I'm missing? What's required to configure an Ubuntu client to use a DNS server? I just want to make sure I'm not missing something before I assume the server's the problem.
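
    A hedged sketch of the client-side pieces; 10.0.0.53 and the domain are placeholders for the DNS server's private IP and your zone:

        # /etc/dhcp3/dhclient.conf on the client
        prepend domain-name-servers 10.0.0.53;
        supersede domain-name "internal.example.com";

        # after restarting networking (or rebooting), test the server explicitly:
        dig @10.0.0.53 somehost.internal.example.com
        nslookup somehost.internal.example.com

    If the direct "dig @server" query times out, the usual suspect on EC2 is the security group: it must allow inbound UDP and TCP port 53 from the client instance, and bind9's listen-on/allow-query settings must cover the private subnet.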

    Read the article

  • phpbb3 email settings for Zoho SMTP server

    - by SkylarMT
    I've spent a while guessing and googling, and haven't found an answer. In the past I set up my forums to send via my Gmail account, but spambots with fake emails have flooded my inbox, so I set up [email protected] with Zoho mail. Now I need to have my installation of phpbb3 send mass emails through the smtp.zoho.com mail server, and I can't figure out what settings I should use. The instructions on https://www.zoho.com/mail/help/pop-access.html are a little vague for anything that doesn't auto-detect the exact settings.
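
    A hedged sketch of the values that normally go into the board's SMTP settings for Zoho; verify the host and ports against Zoho's current SMTP documentation, and note the field names below are paraphrased from the ACP email settings page:

        SMTP server address   : smtp.zoho.com
                                (some phpBB3 builds need ssl://smtp.zoho.com with port 465)
        SMTP server port      : 587 (TLS) or 465 (SSL)
        SMTP authentication   : PLAIN or LOGIN
        SMTP username         : the full mailbox address registered with Zoho
        SMTP password         : that mailbox's password

    The board's "from" address should match the authenticated mailbox (or an alias defined in Zoho), otherwise the relay is likely to be refused.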

    Read the article

  • Teeing Off With Chris Leone at OpenWorld 2012

    - by Kathryn Perry
    A guest post by Chris Leone, Senior Vice President, Oracle Applications Development
    Monday morning in downtown San Francisco - lots of sunshine, plenty of traffic, and sidewalks chock-full of people with fresh faces and blister-free feet. Let the week of Oracle OpenWorld begin! For a great Applications start, Chris Leone packed the house with his Fusion Applications overview session - he covered strategy, scope, roadmaps, and customer successes. Fusion Apps, the world's best SaaS suite, is built on 100 percent standards. Chris talked about its information-driven user experience, its innovative design, and the choice of deployment. People can run Fusion in the cloud, in a managed/hosted environment, or on premise - or they can use a combination of these three models. About seventy percent of our customers go with SaaS. Release 5 of Fusion Apps will become available soon. The cadence of releases will be three times a year. The key drivers are to accelerate business success (no rip and replace) and to simplify business processes. Chris told the audience that organic Fusion is the centerpiece of our cloud solutions, rounded out with acquired offerings such as Taleo Recruiting and RightNow Customer Service. From the cloud solutions, customers can expect real-time and predictive BI, social capabilities, choice of deployment, and more productivity because of a next-generation UX called FUSE. Chris's demo showed a super easy, new UI that touts self-service navigation. We'll blog about FUSE in the very near future. Chris said the next 365 days of Fusion Apps would include more localization, more industries, more power, more mobile, and more configurability. The audience was challenged to think hard about how Fusion could be part of their three-to-five-year plans. Chris set up a great opportunity for you to follow up with your customers as they explore the possibilities.

    Read the article

  • Switch smarthosts in Exchange when using dual WAN

    - by mat0ng
    Hi everybody, I'd like to know if it's possible to set up Exchange 2003/2007 to switch between smarthosts based on the WAN connection currently in use. Example scenario: I have two WAN connections with different ISPs. Exchange is running behind a dual-WAN router. The router is set up to fall back to the secondary WAN when the primary WAN fails. The smarthost set in Exchange is the SMTP server of the primary ISP. Because that smarthost only allows relaying from IPs of the primary WAN, sending mail won't work when the router falls back to the secondary WAN. Sending mail directly through DNS MX lookup is an option, but the ISPs have dynamic IPs that get blacklisted a lot. Thanks in advance!
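
    Exchange will not switch smart hosts by itself based on which WAN link is active, but on Exchange 2007 the send connector can be flipped from a scheduled Exchange Management Shell task that probes the primary link (Exchange 2003 would instead need its SMTP connector changed in ESM, or two connectors with a script toggling them). A hedged sketch; the connector name, probe host and smart hosts are placeholders:

        # run every few minutes from a scheduled task in the Exchange Management Shell
        if (Test-Connection -ComputerName probe.isp1.example.net -Count 2 -Quiet) {
            Set-SendConnector -Identity "Internet" -SmartHosts "smtp.isp1.example.net"
        } else {
            Set-SendConnector -Identity "Internet" -SmartHosts "smtp.isp2.example.net"
        }

    An alternative that avoids scripting entirely is a single smarthost (for example an authenticated relay service) that accepts connections from both WAN addresses.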

    Read the article

  • Which tools should I use to work efficiently on a remote server?

    - by Konstantin
    I rented a virtual Ubuntu server and am trying to set up a web application. I am working from Ubuntu. I know how to use the command line, but it is slow, and as a visual person I prefer graphical interfaces. So I connected with Nautilus via SSH and was then able to browse the directories graphically. But my permissions are just those of "other", so I cannot do much without o+rwx. What tools do you use to set up and administer your servers? Should I write code locally rather than directly on the server and rsync it? EDIT: It is NOT a production server; I am simply fiddling around there.
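
    For the code-editing part of the workflow, a common pattern is to edit locally and push with rsync over SSH, so nothing has to be edited in place on the box. A hedged sketch; host and paths are placeholders:

        # one-way sync of the local working copy to the server's web root
        rsync -avz --delete --exclude='.git' ./myapp/ user@myserver:/var/www/myapp/

    For the permission pain with Nautilus/SSH, a frequently used arrangement is a shared group (for example a "deploy" group, or www-data) owning the web root with group write enabled, so the SSH user never has to rely on the "other" permission bits.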

    Read the article

  • share one vpn connection through windows rras with other clients

    - by KTYP
    I have a Cisco VPN connection to access our branch office. Since several people use the VPN, I'm planning to install the VPN client on one of our servers and share it through RRAS to save licenses (like site-to-site). I installed RRAS on a Windows 2008 R2 server (svrw2k8r2) and added static routes on the client computers. I am able to ping the VPN's IP on the svrw2k8r2 server, but the clients can't seem to connect to the servers in the other branch through this setup. Below is my setup:

        My Branch
        Server:  svrw2k8r2 - Windows 2008 R2
                 IP: 192.168.40.100/24
                 VPN IP: 10.0.100.12/8
        Clients: Win7
                 IP: 192.168.40.101 - 110 /24

        Other Branch
        Servers: IP: 10.10.0.10-20/24

    Read the article
