Search Results

Search found 6488 results on 260 pages for 'global thermonuclear war'.

Page 67/260 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • HTML e-mail as file attachment

    - by johnny
    I have a problem with Outlook 2010. I send an e-mail from a contact form with this code:

        $message = '
        <html>
        <head>
        <title>Anfrage ('.$cfg->get('global.page.title').')</title>
        <style type="text/css">
        body { background:#FFFFFF; color:#000000; }
        #tbl td { background:#F0F0F0; vertical-align:top; }
        #tbl2 td { background:#E0E0E0; vertical-align:top; }
        </style>
        </head>
        <body>
        <p>Mail von der Webseite '.$cfg->get('global.page.title').'</p>
        <table id="tbl">
        <tr>
        <td>Absender</td>
        <td>'.htmlspecialchars($_POST['name']).' ('.htmlspecialchars(trim($_POST['email'])).')</td>
        </tr>
        <tr id="tbl2">
        <td>Betreff:</td>
        <td>'.htmlspecialchars($_POST["topic"]).'</td>
        </tr>
        <tr>
        <td>Nachricht:</td>
        <td>'.nl2br(htmlspecialchars($_POST["message"])).'</td>
        </tr>
        </table>
        </body>
        </html>';

        $absender = $_POST['name'].' <'.$_POST['email'].'>';
        $header = "From: $absender\n";
        $header .= "Reply-To: $absender\n";
        $header .= "X-Mailer: PHP/" . phpversion(). "\n";
        $header .= "X-Sender-IP: " . $_SERVER["REMOTE_ADDR"] . "\n";
        $header .= "Content-Type: text/html; Charset=utf-8";

        $send_mail = mail($cfg->get('contact.toMailAdress'), "Anfrage (".$cfg->get('global.page.title').")", $message, $header);
        //$send_mail = mail("[email protected]", "Anfrage (".$cfg->get('global.page.title').")", $message, $header);
        $_SESSION['kontakt_form_time'] = time();
        $tpl->assign("mail_sent", $send_mail);

    When the e-mail arrives, Outlook does not show the message. Instead it generates a file named [NAME].h, and the message is inside that file. How can I fix this so that the message shows in the e-mail itself? Is this a problem with the settings in Outlook?
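
    A hedged note (not from the original post): when an HTML body ends up as an attachment like this, the usual suspects are the mail headers rather than Outlook itself. Headers passed to mail() are normally separated with CRLF ("\r\n"), and a MIME-Version header is expected next to the Content-Type. A minimal sketch of the header block with only those two changes (everything else as in the question):

        // Sketch only: same headers as above, but with MIME-Version added and CRLF separators.
        $absender = $_POST['name'].' <'.$_POST['email'].'>';
        $header  = "From: $absender\r\n";
        $header .= "Reply-To: $absender\r\n";
        $header .= "X-Mailer: PHP/" . phpversion() . "\r\n";
        $header .= "X-Sender-IP: " . $_SERVER["REMOTE_ADDR"] . "\r\n";
        $header .= "MIME-Version: 1.0\r\n";
        $header .= "Content-Type: text/html; charset=UTF-8";

        $send_mail = mail($cfg->get('contact.toMailAdress'),
                          "Anfrage (".$cfg->get('global.page.title').")",
                          $message, $header);

    If the receiving server is strict about bare LF line endings, the original "\n"-separated headers can bleed into the body, which is consistent with the body being treated as an attachment.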

    Read the article

  • Exchange 2003 Event ID 9337 - Offline Address Book

    - by Creepycc
    I have a support issue with an Exchange 2003 SP2 server.

        Event ID: 9337
        Description: OALGen did not find any recipients in address list '\Global Address List'.
        This offline address list will not be generated. - Default Offline Address List

    What I have checked so far:
    - Previewing the Global Address List within Exchange System Manager looks fine.
    - Turning off cached mode on the Outlook clients still errors.
    - Public folders / system folders are fine.
    - OABINTEG detects no issues; PFDAVAdmin has checked all DACLs.
    - The GAL and OAB have been deleted and recreated several times (with different names).
    - DCDIAG, NETDIAG and ExchangeBPA all run without error.

    I have exhausted the Google links diagnosing this issue; any suggestions?

    Read the article

  • What are the best ways to benchmark non-ECC RAM under Linux on ARM?

    - by moul
    I want to test the integrity and overall performance of non-ECC memory chips on a custom board. Are there tools that run under Linux so I can monitor the system and overall temperature at the same time? Are there any non-ECC-specific tests to run in general?

    EDIT 1: I already know how to monitor temperature (I use a platform-specific sysfs file, /sys/devices/platform/......../temp1_input). Status so far:
    - wazoox: his suggestion works, but I have to write my own tests.
    - Jason Huntley's suggestions: ramspeed does not work on ARM; the STREAM benchmark works and is very fast, so I'll check whether it is accurate and complete; memtest I'll try later, since it does not run directly from Linux; stress for Fedora I'll also try later, since installing Fedora is too problematic for me right now.
    I also found this distribution: http://www.stresslinux.org/sl/
    I'll keep checking tools that run directly under Linux without big dependencies; after that I may try solutions like stresslinux, memtest or stress for Fedora. Thanks for your answers, I'll continue to investigate.
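
    A small sketch (not from the original question) of one way to combine the two requirements under Linux: run a user-space memory tester such as memtester in the background and sample a sysfs temperature file while it runs. The sensor path and the memtester size/iteration arguments below are placeholders to adapt to the board.

        import subprocess
        import time

        # Placeholders: adjust to the actual board and test size.
        TEMP_SENSOR = "/sys/class/hwmon/hwmon0/temp1_input"   # or the platform-specific path
        MEMTESTER_ARGS = ["memtester", "256M", "3"]           # test 256 MB, 3 iterations

        def read_temp_millicelsius(path):
            with open(path) as f:
                return int(f.read().strip())

        # Start the memory test and log temperature once per second until it finishes.
        proc = subprocess.Popen(MEMTESTER_ARGS)
        while proc.poll() is None:
            temp = read_temp_millicelsius(TEMP_SENSOR) / 1000.0
            print("memtester running, temperature: %.1f C" % temp)
            time.sleep(1)

        print("memtester exit code:", proc.returncode)   # non-zero usually indicates a failed test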

    Read the article

  • How to do port forwarding on a D-Link GLB-802C?

    - by Manish
    I have some questions about port forwarding on my D-Link GLB-802C router. For example:
    - My local machine's IP is 117.1.1.81
    - My router's IP is 117.1.1.1
    - My public (web) IP is 117.16.1.1
    My questions are: What will be my Global Address 'To'? What will be my Global Address 'From'? In Destination Port 'From' and 'To', what do I select in the drop-down list, and which port number, to forward HTTP traffic (for my website)? In Local Port, what do I select in the drop-down list, and which port number?

    Read the article

  • Emacs does not load my (.emacs) configuration file

    - by ant2009
    I am using Emacs on Ubuntu 9.04. I keep my Emacs configuration file in the ~/.emacs.d directory; the file is called .emacs and contains some basic configuration. However, every time I start Emacs it never loads my configuration, and I have to keep applying it manually, e.g. M-x transient-mark-mode. My Emacs file is listed below:

        ;; Emac customization file path
        (add-to-list 'load-path "~/emacs.d")
        ;; Use font lock mode
        (global-font-lock-mode t)
        ;; Highlight cursor line
        (global-hl-line-mode t)
        ;; Highlight selected region
        (transient-mark-mode t)

    I want to add to this configuration instead of manually applying the settings every time. Many thanks for any advice.
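
    A hedged observation (not part of the original post): Emacs does not read a file named ~/.emacs.d/.emacs at startup. On the Emacs versions shipped with Ubuntu 9.04 the init file is looked up as ~/.emacs, ~/.emacs.el or ~/.emacs.d/init.el, so moving the file to one of those locations is the likely fix; note also that the load-path line above says "~/emacs.d" rather than "~/.emacs.d". A minimal sketch of the same settings in an expected location:

        ;; ~/.emacs.d/init.el  (or ~/.emacs) -- loaded automatically at startup
        (add-to-list 'load-path "~/.emacs.d")   ;; note the leading dot
        (global-font-lock-mode t)
        (global-hl-line-mode t)
        (transient-mark-mode t)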

    Read the article

  • Samba + CentOS (share not working)

    - by mplacona
    I've done this a few times already, but for some reason this time it's not working. I have a folder called ruby (root:root, mode 0777) in /home/placona. I'm trying to reach this folder from my Windows XP box, but I keep getting permission denied. I can see the share in the server's share list, but whenever I click on the ruby share it won't let me in. Here are my smb.conf settings:

        [global]
            log file = /var/log/samba/samba.%m
            guest account = nobody
            netbios name = DEVBOX
            server string = DEVBOX CENTOS
            workgroup = WORKGROUP
            encrypt passwords = yes
            security = share
            max log size = 50

        [ruby]
            path = /home/placona/ruby

    I want to be able to open this folder without using a password (hence guest account = nobody). I even tried with a password, but it never seems to work. Can anyone spot anything wrong with my settings?
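
    A hedged suggestion (not in the original post): with security = share, a share normally also needs guest access switched on before the nobody account can be used for it. A minimal sketch of the share section with guest access enabled (parameter values are examples, not taken from the poster's server):

        [ruby]
            path = /home/placona/ruby
            # allow access without a password; requests map to the guest account (nobody)
            guest ok = yes
            # drop this line if the share should stay read-only
            read only = no
            browseable = yes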

    Read the article

  • Looking for best practice for writing a serial device communication app

    - by cdotlister
    I am pretty new to serial comms, but I would like advice on how best to achieve a robust application which speaks to and listens to a serial device. I have managed to make use of System.IO.SerialPort, and have successfully connected to, sent data to and received data from my device.

    The way things work is this: my application connects to the COM port and opens it. I then connect my device to the COM port; it detects a connection to the PC and sends a bit of text. It's really just copyright info, as well as the firmware version. I don't do anything with that except display it in my 'activity' window. The device then waits.

    I can then query information by sending a command such as 'QUERY PARAMETER1'. It replies with something like 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', which I then process. I can update it by sending 'SET PARAMETER1 12345', and it will reply with 'QUERY PARAMETER1\r\n\r\n12345\r\n\r\n'. All pretty basic.

    So, what I have done is created a communication class. This class runs in its own thread, sends data back to the main form, and also allows me to send messages to it. Sending data is easy; receiving is a bit more tricky. I have employed the DataReceived event, and whenever data comes in I echo it to my screen.

    My problem is this: when I send a command, I feel my handling is very dodgy. Let's say I am sending 'QUERY PARAMETER1'. I send the command to the device, put 'PARAMETER1' into a global variable, and do a Thread.Sleep(100). In the DataReceived handler I have a bit of logic that checks whether the incoming data CONTAINS the value in the global variable. As the reply may be 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', it sees that it contains my parameter, parses the string, and places the value I am looking for into another global variable. My sending method, having slept for 100 ms, then wakes and checks the returned global variable. If it has data, I'm happy and I process it. The problem is that if the sleep is too short it fails, and it feels flaky: putting stuff into variables, then waiting...

    The other option is to use ReadLine instead, but that's very blocking. So I remove the DataReceived handler and instead just send the data, then call ReadLine(). That may give me better results. There's no time, except when we connect initially, that data comes from the device without me requesting it. So maybe ReadLine will be simpler and safer? Is this known as a 'blocking' read? Also, can I set a timeout? Hopefully someone can guide me.
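
    A hedged sketch (not the poster's code) of the blocking ReadLine approach: SerialPort.ReadLine() does block, and the ReadTimeout property bounds how long it blocks before throwing a TimeoutException, so request/response can be done without DataReceived, Thread.Sleep or shared globals. The port name, baud rate and the skip-echo logic below are assumptions about the device.

        using System;
        using System.IO.Ports;

        class DeviceClient
        {
            private readonly SerialPort _port;

            public DeviceClient(string portName)
            {
                _port = new SerialPort(portName, 9600);
                _port.NewLine = "\r\n";        // device terminates lines with CRLF
                _port.ReadTimeout = 2000;      // ms before ReadLine() throws TimeoutException
                _port.WriteTimeout = 2000;
                _port.Open();
            }

            // Sends e.g. "QUERY PARAMETER1" and returns the first non-empty line
            // that is not just an echo of the command.
            public string Query(string command)
            {
                _port.DiscardInBuffer();       // drop any unsolicited text (startup banner, etc.)
                _port.WriteLine(command);
                try
                {
                    while (true)
                    {
                        string line = _port.ReadLine();   // blocking read, bounded by ReadTimeout
                        if (line.Length == 0 || line == command)
                            continue;                     // skip blank lines and the echoed command
                        return line;                      // e.g. "76767"
                    }
                }
                catch (TimeoutException)
                {
                    return null;                          // device stayed silent; caller decides what to do
                }
            }
        }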

    Read the article

  • Restrict an application's access to a device

    - by Dboy1612
    I have a PC that can handle running multiple game clients at once (specifically Tera Online). What I'd like to do is assign and/or restrict each client's access to a device (gamepad) so that the actions from each device only affect the client I specify. After doing some research with Python's PyGame, I can see that a gamepad essentially works like a keyboard: it sends global input events to the entire system, and the application reads those events. The question is, how can I make it not global? Can I have only one application read one controller? Any help is appreciated!
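
    A hedged illustration (not an answer to the Windows-level question): inside code you control, pygame tags every joystick event with the index of the pad that produced it, so a single process can ignore all but one controller. This does not stop another application from seeing the device; it only shows the per-device filtering pygame exposes. Event attribute names assume pygame 1.x.

        import pygame

        TARGET_PAD = 0   # index of the gamepad this instance should listen to

        pygame.init()
        pygame.display.set_mode((200, 200))   # small window so the event queue is serviced
        pygame.joystick.init()
        pads = [pygame.joystick.Joystick(i) for i in range(pygame.joystick.get_count())]
        for pad in pads:
            pad.init()

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type in (pygame.JOYBUTTONDOWN, pygame.JOYBUTTONUP, pygame.JOYAXISMOTION):
                    if event.joy != TARGET_PAD:
                        continue              # input from the other pads is ignored
                    print("pad %d -> %r" % (event.joy, event))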

    Read the article

  • Where in the registry are application registered hotkeys stored?

    - by Stuart Allen
    I need to find the registry key that is holding the hotkey Ctrl-Alt-Shift-E. This is normally a keyboard shortcut in Photoshop, but I installed a program that did a global override of this sequence. I have uninstalled that program, but it did not clean up after itself. So where can I find this in the registry? I just want to delete the key so the global override won't affect Photoshop anymore. I've Googled the crap out of this; there are various programs that claim to assist, but none of them work.

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like:

        Allow from 192.168.0.3   <-- From <Location /foo/bar> in myserver.com vhost
        Require valid-user       <-- From <Directory /var/www/foo> in global configuration
        Satisfy any              <-- From <Files bar.html> in global configuration

    [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool that lets you check that your rules are doing exactly what you want, and it would be a good learning tool.] If there isn't such a tool, is there a debug option in Apache that will log this information for each incoming request?
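
    A hedged aside (not from the original question): Apache does not ship a per-request "explain" mode, but two built-in aids cover part of this. apachectl -S (or httpd -t -D DUMP_VHOSTS) shows how virtual hosts will be matched, and mod_info can dump the parsed, merged configuration per module. A minimal sketch of enabling mod_info, with example paths and an example allowed network:

        # Requires mod_info; the module path is distribution-specific.
        LoadModule info_module modules/mod_info.so

        <Location /server-info>
            SetHandler server-info
            # Apache 2.2-style access control; restrict to an admin network.
            Order deny,allow
            Deny from all
            Allow from 192.168.0.0/24
        </Location>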

    Read the article

  • Emacs does not open a file passed as an argument, and syntax highlighting does not work

    - by Jus
    On my latest Ubuntu box, when I type, for example, emacs ~/.bashrc, Emacs starts but does not open .bashrc. This is true for any file I pass in. I've used Emacs for several years and have never experienced this problem before. I added (global-font-lock-mode 1);; to my .emacs file, and Emacs does recognize the mode, for example "(C++/; Abbrev)", but it won't do syntax highlighting. If you can solve either of these problems, it will be very much appreciated. The following is my machine's configuration:

        uname -a:
        Linux 2.6.35-28-generic-pae #49-Ubuntu SMP Tue Mar 1 14:58:06 UTC 2011 i686 GNU/Linux

        ~/.emacs:
        (global-font-lock-mode 1);;

    Read the article

  • Remote access to internal machine (ssh port-forwarding)

    - by MacUsers
    I have a server (serv05) at work with a public IP, hosting two KVM guests - vtest1 and vtest2 - in two different private networks - 192.168.122.0 and 192.168.100.0 - respectively, this way:

        [root@serv05 ~]# ip -o addr show | grep -w inet
        1: lo      inet 127.0.0.1/8 scope host lo
        2: eth0    inet xxx.xxx.xx.197/24 brd xxx.xxx.xx.255 scope global eth0
        4: virbr1  inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr1
        6: virbr0  inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

        [root@serv05 ~]# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr1
        xxx.xxx.xx.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
        0.0.0.0         xxx.xxx.xx.62   0.0.0.0         UG    0      0        0 eth0

    I've also set up IP forwarding and masquerading this way:

        iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
        iptables --append FORWARD --in-interface virbr0 -j ACCEPT

    All works up to this point. If I want remote access to vtest1 (or vtest2), first I ssh to serv05 and then from there ssh to vtest1. Is there a way to set up port forwarding so that vtest1 can be accessed directly from the outside world? This is what I probably need to set up:

        external_ip (tcp port 4444) -> DNAT -> 192.168.122.50 (tcp port 22)

    I know it's easily doable using a SOHO router, but I can't figure out how to do it on a Linux box. Any help from you guys? Cheers!!

    Update 1: Now I've made ssh listen on both ports:

        [root@serv05 ssh]# netstat -tulpn | grep ssh
        tcp   0   0 xxx.xxx.xx.197:22     0.0.0.0:*   LISTEN   5092/sshd
        tcp   0   0 xxx.xxx.xx.197:4444   0.0.0.0:*   LISTEN   5092/sshd

    and port 4444 is allowed in the iptables rules:

        [root@serv05 sysconfig]# grep 4444 iptables
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 4444 -j DNAT --to-destination 192.168.122.50:22
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 4444 -j ACCEPT
        -A FORWARD -i eth0 -p tcp -m tcp --dport 4444 -j ACCEPT

    But I'm getting connection refused:

        maci:~ santa$ telnet serv05 4444
        Trying xxx.xxx.xx.197...
        telnet: connect to address xxx.xxx.xx.197: Connection refused
        telnet: Unable to connect to remote host

    Any idea what I'm still missing? Cheers!!
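
    A hedged sketch of the usual DNAT arrangement (addresses and interface names taken from the question; the rule ordering is the assumption worth checking): with a PREROUTING DNAT in place, sshd on the host does not need to listen on 4444 at all, and the FORWARD rules have to sit above any distribution-default REJECT rule, which -A (append) does not guarantee.

        # Rewrite inbound port 4444 on the public interface to the guest's sshd.
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 4444 \
                 -j DNAT --to-destination 192.168.122.50:22

        # Insert (-I) the FORWARD rules at the top so they are not shadowed by a later REJECT.
        iptables -I FORWARD 1 -d 192.168.122.50 -p tcp --dport 22 \
                 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -I FORWARD 2 -s 192.168.122.50 -p tcp --sport 22 \
                 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Forwarding must be enabled (it already is here, since masquerading works).
        sysctl -w net.ipv4.ip_forward=1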

    Read the article

  • IIS returning plain Forbidden response. No HTTP code

    - by Alex Pineda
    I'm running a ServiceStack application on IIS. My regular services work fine and have had no problems with permissions. My new project involves serving generated PDFs. I gave IIS_IUSRS read/write permissions to the Temp directory under my app directory, and I also allow non-SSL connections to this directory. When I browse to the file which ServiceStack is supposed to automatically serve up (e.g. http://ryu.com/Temp/201310171723337631.pdf) I get this:

        Forbidden
        Request.HttpMethod: GET
        Request.PathInfo:
        Request.QueryString:
        Request.RawUrl: /ryu/Temp/201310171723337631.pdf
        App.IsIntegratedPipeline: True
        App.WebHostPhysicalPath: C:\inetpub\ryu
        App.WebHostRootFileNames: [global.asax,global.asax.cs,web.config,bin,temp]

    Now this doesn't look like a ServiceStack error message, more like IIS, but I'm not certain how to get to the bottom of it. Authorization settings are Allow All.

    Read the article

  • Why "scope link" ipv6 address can be pinged via interfaces which they are not active on

    - by olagu
        [root@2_01 ~]# /sbin/ip -6 addr show pubeth0
            inet6 2001:1::6/64 scope global
            inet6 2001:1::1/64 scope global
            inet6 fe80::20c:29ff:fe69:f9e8/64 scope link

        [root@v2_01 ~]# /sbin/ip -6 addr show pubeth1
            inet6 fe80::20c:29ff:fe69:f906/64 scope link

        [root@2_01 ~]# ping6 fe80::20c:29ff:fe69:f9e8%pubeth1
        PING fe80::20c:29ff:fe69:f9e8%pubeth1(fe80::20c:29ff:fe69:f9e8) 56 data bytes
        64 bytes from fe80::20c:29ff:fe69:f9e8: icmp_seq=1 ttl=64 time=0.259 ms

        --- fe80::20c:29ff:fe69:f9e8%pubeth1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 286ms
        rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms

        [root@2_01 ~]# ping6 fe80::20c:29ff:fe69:f9e8%pubeth0
        PING fe80::20c:29ff:fe69:f9e8%pubeth0(fe80::20c:29ff:fe69:f9e8) 56 data bytes
        64 bytes from fe80::20c:29ff:fe69:f9e8: icmp_seq=1 ttl=64 time=0.057 ms

        --- fe80::20c:29ff:fe69:f9e8%pubeth0 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 390ms
        rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms

    Why can I ping6 "fe80::20c:29ff:fe69:f9e8" via pubeth1?

    Read the article

  • Is there any trick to join and use Windows 8/8.1 with Samba 4 (4.1.6)?

    - by tenshimsm
    It seems that Samba doesn't like Windows 8 at all. I've followed various tutorials and I can't get Windows 8 to work properly with an Ubuntu server as domain controller. This week I downloaded Ubuntu 14.04 LTS and set up a quick domain configuration. As usual, all other Windows versions (XP and 7) work, but the newest M$ nightmare doesn't: this time it doesn't even join the domain, and keeps saying that my username or password is wrong. My /etc/samba/smb.conf:

        # Global parameters
        [global]
            workgroup = DOMAIN
            realm = DOMAIN.LAN
            netbios name = DOM
            server role = active directory domain controller
            dns forwarder = 8.8.8.8
            idmap_ldb:use rfc2307 = yes

        [netlogon]
            path = /var/lib/samba/sysvol/domain.lan/scripts
            read only = No

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No

        [test]
            directory mode = 0750
            path = /SHARES/test
            read only = no

    Does anyone have a tutorial that really works? I've tried many, and each one has a different configuration that only seems to work for the person who wrote it. Also, is there a way to import my old AD users, computers and IDs so that I won't need to rejoin all the computers?

    Read the article

  • Change Exchange Server Name Before Upgrade

    - by ffrugone
    I need to upgrade Exchange Server from 2003 to 2010, and I'm physically changing servers as well as software. I'm worried that redirecting the Outlook clients after the upgrade is going to be troublesome. So I thought that, before doing anything else, I would change the name of the Exchange server on the client from 'server-name.domain.com' to 'mail.domain.com' and add an entry in DNS that points 'mail.domain.com' to the same IP as 'server-name.domain.com'. However, even though I added 'mail.domain.com' to DNS, I cannot get the Exchange server name to change on the client computers. I found out that the Outlook clients check the Global Catalog for the name of the Exchange server computer. My question is: can I change the Global Catalog address of the Exchange computer from 'server-name.domain.com' to 'mail.domain.com'? Either way, is there a better way to do this? Thanks.

    Read the article

  • Choose a server backend for certain URLs with HAProxy

    - by shingara
    For some URLs I don't want to use one of the servers, but the other one instead. Currently I have this HAProxy configuration:

        global
            daemon
            log 127.0.0.1 local0
            #log loghost local0 info
            maxconn 4096
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats refresh 5s
            stats auth admin:123abc789xyz

        # Set up application listeners here.
        listen application 0.0.0.0:10000
            server localhost 127.0.0.1:10100 weight 1 maxconn 5 check
            server externe 127.0.0.1:10101 weight 1 maxconn 5 check

    For example, I want all URLs under /users to be served only by the server localhost, not by externe.
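
    A hedged sketch of the usual way to do this (backend and ACL names below are invented, and it assumes an HAProxy version with acl/use_backend support, i.e. 1.3 or later): match the path prefix with an ACL and route those requests to a backend that only contains localhost, leaving everything else on the pair.

        frontend application
            bind 0.0.0.0:10000
            acl url_users path_beg /users
            use_backend users_only if url_users
            default_backend everything

        backend users_only
            server localhost 127.0.0.1:10100 weight 1 maxconn 5 check

        backend everything
            balance roundrobin
            server localhost 127.0.0.1:10100 weight 1 maxconn 5 check
            server externe   127.0.0.1:10101 weight 1 maxconn 5 check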

    Read the article

  • Internationalize WebCenter Portal - Content Presenter

    - by Stefan Krantz
    Lately we have been involved in engagements where internationalization has been holding the project back from success. In this post we are going to explain how to get Content Presenter and its editorials to comply with the current selected locale for the WebCenter Portal session. As you probably know by now WebCenter Portal leverages the Localization support from Java Server Faces (JSF), in this post we will assume that the localization is controlled and enforced by switching the current browsers locale between English and Spanish. There is two main scenarios in internationalization of a content enabled pages, since Content Presenter offers both presentation of information as well as contribution of information, in this post we will look at how to enable seamless integration of correct localized version of the back end content file and how to enable the editor/author to edit the correct localized version of the file based on the current browser locale. Solution Scenario 1 - Localization aware content presentation Due to the amount of steps required to implement the enclosed solution proposal I have decided to share the solution with you in group components for each facet of the solution. If you want to get more details on each step, you can review the enclosed components. This post will guide you through the steps of enabling each component and what it enables/changes in each section of the system. Enable Content Presenter Customization By leveraging a predictable naming convention of the data files used to hold the content for the Content Presenter instance we can easily develop a component that will dynamically switch the name out before presenting the information. The naming convention we have leverage is the industry best practice by having a shared identifier as prefix (ContentABC) and a language enabled suffix (_EN) (_ES). So the assumption is that each file pair in above example should look like following:- English version - (ContentABC_EN)- Spanish version - (ContentABC_ES) Based on above theory we can now easily regardless of the primary version assigned to the content presenter instance switch the language out by using the localization support from JSF. 
Below java bean (oracle.webcenter.doclib.internal.view.presenter.NLSHelperBean) is enclosed in the customization project available for download at the bottom of the post: 1: public static final String CP_D_DOCNAME_FORMAT = "%s_%s"; 2: public static final int CP_UNIQUE_ID_INDEX = 0; 3: private ContentPresenter presenter = null; 4:   5:   6: public NLSHelperBean() { 7: super(); 8: } 9:   10: /** 11: * This method updates the configuration for the pageFlowScope to have the correct datafile 12: * for the current Locale 13: */ 14: public void initLocaleForDataFile() { 15: String dataFile = null; 16: // Checking that state of presenter is present, also make sure the item is eligible for localization by locating the "_" in the name 17: if(presenter.getConfiguration().getDatasource() != null && 18: presenter.getConfiguration().getDatasource().isNodeDatasource() && 19: presenter.getConfiguration().getDatasource().getNodeIdDatasource() != null && 20: !presenter.getConfiguration().getDatasource().getNodeIdDatasource().equals("") && 21: presenter.getConfiguration().getDatasource().getNodeIdDatasource().indexOf("_") > 0) { 22: dataFile = presenter.getConfiguration().getDatasource().getNodeIdDatasource(); 23: FacesContext fc = FacesContext.getCurrentInstance(); 24: //Leveraging the current faces contenxt to get current localization language 25: String currentLocale = fc.getViewRoot().getLocale().getLanguage().toUpperCase(); 26: String newDataFile = dataFile; 27: String [] uniqueIdArr = dataFile.split("_"); 28: if(uniqueIdArr.length > 0) { 29: newDataFile = String.format(CP_D_DOCNAME_FORMAT, uniqueIdArr[CP_UNIQUE_ID_INDEX], currentLocale); 30: } 31: //Replacing the current Node datasource with localized datafile. 32: presenter.getConfiguration().getDatasource().setNodeIdDatasource(newDataFile); 33: } 34: } With this bean code available to our WebCenter Portal implementation we can start the next step, by overriding the standard behavior in content presenter by applying a MDS Taskflow customization to the content presenter taskflow, following taskflow customization has been applied to the customization project attached to this post:- Library: WebCenter Document Library Service View- Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter- File: contentPresenter.xml Changes made in above customization view:1. A new method invocation activity has been added (initLocaleForDataFile)2. The method invocation invokes the new NLSHelperBean3. The default activity is moved to the new Method invocation (initLocaleForDataFile)4. The outcome from the method invocation goes to determine-navigation (original default activity) The above changes concludes the presentation modification to support a compatible localization scenario for a content driven page. In addition this customization do not limit or disables the out of the box capabilities of WebCenter Portal. 
Steps to enable above customization Start JDeveloper and open your WebCenter Portal Application Select "Open Project" and include the extracted project you downloaded (CPNLSCustomizations.zip) Make sure the build out put from CPNLSCustomizations project is a dependency to your Portal project Deploy your Portal Application to your WC_CustomPortal managed server Make sure your naming convention of the two data files follow above recommendation Example result of the solution: Solution Scenario 2 - Localization aware content creation and authoring As you could see from Solution Scenario 1 we require the naming convention to be strictly followed, this means in the hands of a user with limited technology knowledge this can be one of the failing links in this solutions. Therefore I strongly recommend that you also follow this part since this will eliminate this risk and also increase the editors/authors usability with a magnitude. The current WebCenter Portal Architecture leverages WebCenter Content today to maintain, publish and manage content, therefore we need to make few efforts in making sure this part of the architecture is on board with our new naming practice and also simplifies the creation of content for our end users. As you probably remember the naming convention required a prefix to be common so I propose we enable a new component that help you auto name the content items dDocName (this means that the readable title can still be in a human readable format). The new component (WCP-LocalizationSupport.zip) built for this scenario will enable a couple of things: 1. A new service where a sequential number can be generate on request - service name: GET_WCP_LOCALE_CONTENTID 2. The content presenter is leveraging a specific function when launching the content creation wizard from within Content Presenter. Assumption is that users will create the content by clicking "Create Web Content" button. When clicking the button the wizard opened is actually running in side of WebCenter Content server, file executed (contentwizard.hcsp). 
This file uses JSON commands that will generate operations in the content server, I have extend this file to create two identical data files instead of one.- First it creates the English version by leveraging the new Service and a Global Rule to set the dDocName on the original check in screen, this global rule is available in a configuration package attached to this blog (NLSContentProfileRule.zip)- Secondly we run a set of JSON javascripts to create the Spanish version with the same details except for the name where we replace the suffix with (_ES)- Then content creation wizard ends with its out of the box behavior and assigns the Content Presenter instance the English versionSee Javascript markup below - this can be changed in the (WCP-LocalizationSupport.zip/component/WCP-LocalizationSupport/publish/webcenter) 1: //---------------------------------------A-TEAM--------------------------------------- 2: WCM.ContentWizard.CheckinContentPage.OnCheckinComplete = function(returnParams) 3: { 4: var callback = WCM.ContentWizard.CheckinContentPage.checkinCompleteCallback; 5: WCM.ContentWizard.ChooseContentPage.OnSelectionComplete(returnParams, callback); 6: // Load latest DOC_INFO_SIMPLE 7: var cgiPath = DOCLIB.config.httpCgiPath; 8: var jsonBinder = new WCM.Idc.JSONBinder(); 9: jsonBinder.SetLocalDataValue('IdcService', 'DOC_INFO_SIMPLE'); 10: jsonBinder.SetLocalDataValue('dID', returnParams.dID); 11: jsonBinder.Send(cgiPath, $CB(this, function(http) { 12: var ret = http.GetResponseText(); 13: var binder = new WCM.Idc.JSONBinder(ret); 14: var dDocName = binder.GetResultSetValue('DOC_INFO', 'dDocName', 0); 15: if(dDocName.indexOf("_") > 0){ 16: var ssBinder = new WCM.Idc.JSONBinder(); 17: ssBinder.SetLocalDataValue('IdcService', 'SS_CHECKIN_NEW'); 18: //Additional Localization dDocName generated 19: ssBinder.SetLocalDataValue('dDocName', getLocalizedDocName(dDocName, "es")); 20: ssBinder.SetLocalDataValue('primaryFile', 'default.xml'); 21: ssBinder.SetLocalDataValue('ssDefaultDocumentToken', 'SSContributorDataFile'); 22:   23: for(var n = 0 ; n < binder.GetResultSetFields('DOC_INFO').length ; n++) { 24: var field = binder.GetResultSetFields('DOC_INFO')[n]; 25: if(field != 'dID' && 26: field != 'dDocName' && 27: field != 'dID' && 28: field != 'dReleaseState' && 29: field != 'dRevClassID' && 30: field != 'dRevisionID' && 31: field != 'dRevLabel') { 32: ssBinder.SetLocalDataValue(field, binder.GetResultSetValue('DOC_INFO', field, 0)); 33: } 34: } 35: ssBinder.Send(cgiPath, $CB(this, function(http) {})); 36: } 37: })); 38: } 39:   40: //Support function to create localized dDocNames 41: function getLocalizedDocName(dDocName, lang) { 42: var result = dDocName.replace("_EN", ("_" + lang)); 43: return result; 44: } 45: //---------------------------------------A-TEAM--------------------------------------- 3. 
By applying the enclosed NLSContentProfileRule.zip, the check in screen for DataFile creation will have auto naming enabled with localization suffix (default is English)You can change the default language by updating the GlobalNlsRule and assign preferred prefix.See Rule markup for dDocName field below: <$executeService("GET_WCP_LOCALE_CONTENTID")$><$dprDefaultValue=WCP_LOCALE.LocaleContentId & "_EN"$> Steps to enable above extensions and configurations Install WebCenter Component (WCP-LocalizationSupport.zip), via the AdminServer in WebCenter Content Administration menus Enable the component and restart the content server Apply the configuration bundle to enable the new Global Rule (GlobalNlsRule), via the WebCenter Content Administration/Config Migration Admin New Content Creation Experience Result Content EditingContent editing will by default be enabled for authoring in the current select locale since the content file is selected by (Solution Scenario 1), this means that a user can switch his browser locale and then get the editing experience adaptable to the current selected locale. NotesA-Team are planning to post a solution on how to inline switch the locale of the WebCenter Portal Session, so the Content Presenter, Navigation Model and other Face related features are localized accordingly. Content Presenter examples used in this post is an extension to following post:https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enable_content_editing_of_iterative Downloads CPNLSCustomizations.zip - WebCenter Portal, Content Presenter Customization https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/CPNLSCustomizations.zip WCP-LocalizationSupport.zip - WebCenter Content, Extension Component to enable localization creation of files with compliant auto naminghttps://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/WCP-LocalizationSupport.zip NLSContentProfileRule.zip - WebCenter Content, Configuration Update Bundle to enable Global rule for new check in naming of data fileshttps://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/NLSContentProfileRule.zip

    Read the article

  • What’s New in JD Edwards EnterpriseOne Release 9.1

    - by Breanne Cooley
    Oracle JD Edwards EnterpriseOne 9.1 offers customers significant updates to usability and business processes to improve productivity and bolster business value. This release addresses critical user needs while delivering key enhancements in several areas, including:

    - New User Experience: Release 9.1 offers significant enhancements to the user experience. New Web 2.0 features reduce task time and enable you to access meaningful information.
    - Enhanced Reporting: Oracle’s JD Edwards EnterpriseOne One View Reporting is a breakthrough solution that allows business users to create interactive reports - without IT support.
    - Industry Specific Functionality: This new release delivers key enhancements for the consumer goods, real estate management, and manufacturing and distribution industries.
    - Enhanced Support for Global Operations: JD Edwards EnterpriseOne 9.1 supports global operations with several new features, including enhancements that consider the entire ERP business process associated with managing country-of-origin requirements.
    - Productivity Features: This new release offers new, more tightly integrated business processes and other productivity advancements like improved data access and enhanced financial controls.

    Want to find out more?
    - Bookmark the JD Edwards EnterpriseOne web page
    - Listen to the Podcast: Announcing JD Edwards EnterpriseOne 9.1
    - Watch the Demo: JD Edwards EnterpriseOne 9.1 Features Demo
    - Watch the Demo: JD Edwards EnterpriseOne One View Reporting Demo
    - Review the Data Sheet: JD Edwards EnterpriseOne Tools 9.1

    New Training: JD Edwards EnterpriseOne 9.1 training through Oracle University is designed to help you leverage these usability and productivity enhancements. The curriculum is aligned to the JD Edwards EnterpriseOne products and will teach your team how to efficiently and effectively implement and use your applications. Get started today! View available training and schedules.

    -Jim Vonick, Oracle University Market Development Manager

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating an newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Centers advanced inventory and reporting features assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementationImportant distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot; instead of spending long periods installing + upgrading packages in single user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straight-forward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g. 
discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (out side of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below. NOTE: There is no magic!  If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10.Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements(Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents, accordingly.IMPORTANT:  In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time.HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation:  Begin by using OCDoctor --agent-prereq to determine whether OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
SIMULATE the deployment of the LU-related pre-patches Observe whether any of the LU-related pre-patches will require a reboot The job details for each Global Zone will advise whether a reboot step will be required ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window Check the job status for each node, resolving any issues found Once the LU-related pre-patches are applied, you can Ops Center to patch using Live Upgrade on Solaris 10 Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS!(this is the heart of the tip): Create an OS Update Profile containing the patches that make up your standard build Use Solaris Baselines when possible Add other individual patches as needed ACTUALLY deploy the OS Update Profile Specify the appropriate Live Upgrade options, e.g. Synchronize the active BE to the alternate BE before patching Do not activate the BE after patching Check the job status for each node, resolving any issues found Activate the newly patched BE according to your change control process Activate = Reboot to the ABE, making the ABE the new active BE Ops Center does not separate LU activate from reboot, so expect a reboot! Check the job status for each node, resolving any issues found Examples (w/Screenshots) Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only) ABE to be created on ZFS with name S10_12_07REC (Example) Uses built in feature to call “lucreate -n S10_12_07REC” behind scenes if not already present NOTE: Leave “lucreate” params blank (if you do specify options, the will be appended after -n $ABEName) Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script) The Alternate Boot Environment is to be created via custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used. Operational Profile, which provides the script to create an ABE: Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE Relies on user-supplied script in the form of an Operational Profile Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing! EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_1207REC NOTE: OC special variable is $ABEName Boot Environment Profile, which references the Operational Profile Script = Operational Profile on this screen Refers to Operational Profile shown in the previous section The user-supplied S10_Create_BE Operational Profile will be run The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed) Solaris 10 OS Update Profile (to provide the actual patch specifications) Solaris 10 Baseline “Recommended” chosen for “Install” Solaris 10 OS Update Plan (two-steps in this case) “Create a Boot Environment” + “Update OS” are chosen. 
Using Ops Center to patch Solaris 11 with Boot Environments (as needed) Create a Solaris 11 OS Update Profile containing the packages that make up your standard build ACTUALLY deploy the Solaris 11 OS Update Profile BE will be created if needed (or you can stipulate no BE) BE name will be auto-generated (if needed), or you may specify a BE name Check the job status for each node, resolving any issues found Check if a BE was created; if so, activate the new BE Activate = Reboot to the BE, making the new BE the active BE Ops Center does not separate BE activate from reboot NOTE: Not every Solaris 11 OS Update will require a new BE, so a reboot may not be necessary. Solaris 11: Auto BE Create (as Needed -- let Ops Center decide) BE to be created as needed BE to be named automatically Reboot (if necessary) deferred to separate step Solaris 11: OS Profile Solaris 11 “entire” chosen for a particular SRU Solaris 11: OS Update Plan (w/BE)  “Create a Boot Environment” + “Update OS” are chosen. Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers.  For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward.  For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straight forward way to patch, and far fewer prerequisites to satisfy in getting there.  Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall up-time across all the Solaris systems in your environment. Please let us know what you think?  Until next time...\Leon-- Leon Shaner | Senior IT/Product ArchitectSystems Management | Ops Center Engineering @ Oracle The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Scottish Visual Studio 2010 Launch event with Jason Zander

    - by Martin Hinshelwood
    Microsoft are hosting a launch event for Visual Studio 2010 on Friday 16th April in Edinburgh. They have managed to convince one of the head honchos from the Visual Studio product team to come to Scotland: after Scott Guthrie in Glasgow last week, Jason Zander, Global General Manager for Visual Studio, will be arriving in Edinburgh for the launch event. There will be two speakers. Jason will be up first and will be doing a session on Windows, Web, Cloud and Windows Phone 7 development with Visual Studio 2010. Second up is Giles Davies, the UK’s Technical Specialist for Visual Studio ALM (formerly Visual Studio Team System), who will be introducing the new Visual Studio 2010 developer and tester collaboration features.

    LAUNCH AGENDA:
    - 9.30am - 10.00am: Arrival
    - 10.00am - 11.30am: Keynote & Q&A - Jason Zander, Global GM for Visual Studio
    - 11.30am - 12.00pm: Break
    - 12.00pm - 1.00pm: Developer & Tester Collaboration with Visual Studio 2010 - Giles Davies, Technical Specialist
    - 1.00pm - 1.30pm: Lunch

    DATE: Friday, 16th April 2010
    LOCATION: Microsoft Edinburgh, Waverley Gate, 2-4 Waterloo Place, Edinburgh, EH1 3EG

    I think Jason will be hanging out for the afternoon to answer questions and meet everyone. If you would like to attend, please email Nathan Davies on [email protected] with your name, company and email address.

    Technorati Tags: VS2010, TFS2010, Visual Studio, Visual Studio 2010

    Read the article

  • Don't apply for your first job somewhere; apply for an experience at Oracle.

    - by cristian.condurache(at)oracle.com
    Hi! My name is Stijn and I currently work as a Business Development Consultant for Oracle in Dublin since November 2010. I’m originally from Belgium and I graduated last year from the Vlerick Leuven Gent Management School. In many ways you could say I’m living the life I asked for: an international career with global organization. I’m unbelievably grateful however, because opportunities like this don’t come by the dozen. Actually, going through university and business school my dreams of an international career were clouded quite quickly. Following all the ‘right’ steps wasn’t enough. The lack of offers for, and trust in, new starters to take on a challenge like this was a reality check for me and many of my friends. It takes a company that recognizes the opportunity of recruiting talented individuals by offering them something they actually want: a first job based abroad! My job is focused on generating demand for Oracle products over the phone. In only a few months, the amazing things I’ve experienced, the people I’ve talked to, the learning experiences I’ve had in and outside of work are too many to list. From having CEO’s on the phone, to having meetings with 15 different nationalities, to getting settled from scratch in a new country… it’s something that builds you as a person. But don’t be fooled though, it’s on you - where it starts. Although Oracle gives you the best training and resources to do your job and Ireland is a playground for everything else, it’s you that is responsible. You are in control and much is expected. What you get in return however, is beyond incredible. If you are interested in joining the same team as Stijn, please visit http://campus.oracle.com or contact [email protected] Technorati Tags: Oracle,opportunity,global organisation,career,Business Development Consultant

    Read the article

  • So Singletons are bad, then what?

    - by Bobby Tables
    There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I've been one of those people earlier in my career too. I can see what the problem is now, and yet, there are still many cases where I can't see a nice alternative - and not many of the anti-Singleton discussions really provide one. Here is a real example from a major recent project I was involved in: The application was a thick client with many separate screens and components which uses huge amounts of data from a server state which isn't updated too often. This data was basically cached in a Singleton "manager" object - the dreaded "global state". The idea was to have this one place in the app which keeps the data stored and synced, and then any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting to the server would take too much bandwidth - and I'm talking thousands of dollars extra Internet bills per week, so that was unacceptable. Is there any other approach that could be appropriate here than basically having this kind of global data manager cache object? This object doesn't officially have to be a "Singleton" of course, but it does conceptually make sense to be one. What is a nice clean alternative here?
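
    A hedged sketch (class and method names invented, not from the post) of the alternative most answers to this kind of question converge on: keep exactly one cache instance, but create it in one place and pass it to the screens that need it, so the single instance becomes a wiring decision rather than a global that everything reaches for.

        class ServerDataCache:
            """Holds rarely-changing server state so screens don't re-request it."""
            def __init__(self, server_client):
                self._client = server_client
                self._data = {}

            def get(self, key):
                if key not in self._data:                  # only hit the network on a miss
                    self._data[key] = self._client.fetch(key)
                return self._data[key]


        class AccountsScreen:
            def __init__(self, cache):                     # the dependency is explicit and swappable
                self._cache = cache

            def show(self):
                return self._cache.get("accounts")


        # Composition root: the application still creates exactly one cache,
        # but no screen knows (or cares) that there is only one.
        #   cache = ServerDataCache(server_client)
        #   screen = AccountsScreen(cache)

    In tests a screen can then be handed a fake cache, which is the main practical gain over a hard-coded Singleton, while the bandwidth-saving behaviour stays exactly the same.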

    Read the article

  • Podcast Show Notes: Toronto Architect Day Panel Discussion

    - by Bob Rhubart
    The latest Oracle Technology Network ArchBeat Podcast features a four-part series recorded live during the panel discussion at OTN Architect Day in Toronto, April 21, 2011. More than 100 people attended the event, and the audience tossed a lot of great questions at a terrific panel. Listen for yourself...
    - Listen to Part 1: Panel introduction and a discussion of the typical characteristics of Cloud early-adopters.
    - Listen to Part 2 (June 22): The panelists respond to an audience question about what happens when data in the Cloud crosses international borders.
    - Listen to Part 3 (June 29): The panel discusses public versus private cloud as the best strategy for small or start-up businesses.
    - Listen to Part 4 (July 6): The panel responds to an audience question about how cloud computing changes performance testing paradigms.

    The Architect Day panel includes (listed alphabetically):
    - Dr. James Baty: Vice President, Oracle Global Enterprise Architecture Program [LinkedIn]
    - Dave Chappelle: Enterprise Architect, Oracle Global Enterprise Architecture Program [LinkedIn]
    - Timothy Davis: Director, Enterprise Architecture, Oracle Enterprise Solutions Group [LinkedIn]
    - Michael Glas: Director, Enterprise Architecture, Oracle [LinkedIn]
    - Bob Hensle: Director, Oracle [LinkedIn]
    - Floyd Marinescu: Co-founder & Chief Editor of InfoQ.com and the QCon conferences [LinkedIn | Twitter | Homepage]
    - Cary Millsap: Oracle ACE Director; Founder, President, and CEO at Method R Corporation [LinkedIn | Blog | Twitter]

    Coming Soon:
    - IASA CEO Paul Preiss talks about architecture as a profession.
    - Thomas Erl and Anne Thomas Manes discuss their new book SOA Governance: Governing Shared Services On-Premise & in the Cloud.
    - A discussion of women in architecture.

    Stay tuned: RSS

    Read the article
