Search Results

Search found 7776 results on 312 pages for 'configure'.


  • Oracle Virtualization Friday Spotlight - October 18, 2013

    - by Monica Kumar
    Opening The Oracle VM Templates Blackbox
    Oracle VM Templates give you the efficiency of speed and the assurance of no guesswork. For those in the know, Oracle VM Guest Additions is a great way to empower you to do more interesting things with the Templates. Today's blog article shares the secrets with those who are not content with treating Oracle VM Templates as a black box. Oracle VM Guest Additions is a set of packages that can be installed on the guest operating system of a virtual machine running in the Oracle VM environment. These packages provide the tools to allow bi-directional communication directly between Oracle VM Manager and the operating system running within the virtual machine. OK, here's where the 'power-user' part comes in: this gives you fine-grained control over the configuration and behavior of components running within the virtual machine directly from Oracle VM Manager. You now have the ability to see and direct what goes on inside your VM from Oracle VM Manager. You can: get reporting on IP addressing; use the template configuration facility to automatically configure virtual machines as they are first started; send messages directly to a virtual machine to trigger programmed events; and query a virtual machine to obtain information pertaining to previous messages. Enough of the theory! To get hands-on how-tos and talk directly with the product expert on Oracle VM Guest Additions, Robbie de Meyer, or Oracle VM Templates for Oracle Database and RAC Template expert Saar Maoz, join us for the Oct 24th live webcast. You can also read more about the Oracle VM Guest Additions in the whitepaper.

    Read the article

  • Format/build RAID 5 with one 4K drive, three 512B

    - by skidawgz
    I have four WD 1TB drives which I want to make into a 4x1TB RAID 5 array. I am not sure what course of action to take next. How do I configure my 4th drive (sde) to align with the rest? Will this affect performance? I receive this message (which brings me here to ask these questions):

      The device presents a logical sector size that is smaller than the physical sector size. Aligning to a physical sector (or optimal I/O) size boundary is recommended, or performance may be impacted.

    fdisk -l shows:

      Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
      81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xf324ba09

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1            2048  1953525167   976761560   fd  Linux raid autodetect

      Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x38bcc1f0

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect

      Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
      81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x570f77e7

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd1            2048  1953525167   976761560   fd  Linux raid autodetect

      Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0xeb665e7b

         Device Boot      Start         End      Blocks   Id  System
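    One way to bring /dev/sde into line with the others is to create its partition on a 1 MiB (sector 2048) boundary, which also satisfies the 4096-byte physical sector size; a rough sketch, assuming the drive holds no data yet and the array name md0 is just an example:

      # Partition /dev/sde aligned to its 4 KiB physical sectors (same
      # start sector, 2048, that sdb/sdc/sdd already use).
      sudo parted -s -a optimal /dev/sde mklabel msdos
      sudo parted -s -a optimal /dev/sde mkpart primary 1MiB 100%
      sudo parted -s /dev/sde set 1 raid on

      # Then assemble the four partitions into the RAID 5 array.
      sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    With the partition aligned like this, the mixed 512e drive should not hurt the array's performance, and fdisk -l should stop printing the warning for /dev/sde.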

    Read the article

  • Dual boot 12.04/Windows 7: after installation from USB, reboots straight to Windows with no option to select Ubuntu and no boot loader

    - by Alkatraz
    Hardware: Windows 7 Home Premium, Intel i5 2500K CPU, ASUS P8Z68-V PRO motherboard, GeForce GTX 570 GPU, Corsair 120GB SSD (Windows 7 OS), WD 1TB HDD.
    I select the USB drive in the BIOS, boot to it and choose install. I select manual partitioning and split the 200GB of unallocated space on my 1TB HDD into a 16GB swap partition, 30GB for / (ext4) and 154GB for /home (ext4). I make sure that the boot loader is installed to the Corsair 120GB SSD (where the Windows boot loader is) and the installation goes smoothly. When I reboot after the install it runs through the BIOS straight into Windows. I have tried upwards of a dozen times, and I have also tried with Linux Mint. I have also re-downloaded the ISO and used two different programs to create the live USB. The installation seems to go well, as I can see the partitions I created in Windows Disk Management after the install: http://imgur.com/Wp0V1 I currently run Lubuntu on my laptop, but it is not a dual boot. I'm assuming this is a boot loader issue, and I am assuming that inside those partitions in my screenshot there is a working install of Ubuntu 12.04; I just have no way of getting to it.
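    If a working Ubuntu really is sitting in those partitions, reinstalling GRUB onto the disk the BIOS boots from (the SSD) usually brings the menu back; a rough sketch from the live USB, assuming the 30GB / partition is /dev/sdb2 and the SSD is /dev/sda (substitute your actual device names from sudo fdisk -l):

      # Mount the installed root partition and chroot into it.
      sudo mount /dev/sdb2 /mnt                 # the 30GB ext4 root (/)
      for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done

      # Reinstall GRUB to the boot disk and regenerate the menu
      # (update-grub runs os-prober, which should pick up Windows 7).
      sudo chroot /mnt grub-install /dev/sda
      sudo chroot /mnt update-grub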

    Read the article

  • Cannot access the Internet (DNS names do not resolve) after update today

    - by Aras
    I have been using Precise for a few weeks now for work with no problems. Today I am not able to access any website using either the wired or the wireless connection. I installed today's updates, which included nautilus, xserver, and a new kernel (3.2.0-24). After restarting I was no longer able to browse the Internet with Firefox or Chrome. Trying to ping Google in a terminal gives: ping: unknown host google.ca
    I have tried, without any success so far: connecting to wireless and wired networks (both working on other machines); restarting the machine and booting with the previous kernel; manually configuring OpenDNS on my wired connection; and restarting the network, the laptop and the wireless card. I am not sure where to go next. Please let me know the cause of the issue or help me troubleshoot it. Note that the laptop does receive an IP address, and it can ping the IP address of google.ca (74.125.127.94) but not the domain name, or any domain name for that matter. This system was upgraded from 11.10 to 12.04 more than two weeks ago.
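    A couple of quick checks can confirm it is purely a name-resolution problem and provide a temporary workaround; a sketch (the public resolver address below is only an example):

      # Routing already works, since the raw IP pings; ask a public DNS
      # server directly to see whether lookups succeed when resolv.conf is bypassed.
      nslookup google.ca 8.8.8.8

      # See which nameserver the system is actually trying to use.
      cat /etc/resolv.conf

      # Temporary workaround until the real cause is found.
      echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
      sudo service network-manager restart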

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a "Gallery", which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that they are stored in a standard Hyper-V format, with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the "SYSPREP" process on it. SYSPREP is a Windows utility that essentially strips the identity from a system; when you restart that system it asks for a few details, such as what you want to call it, and so on. By doing this, you can essentially create your own gallery of systems for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx   But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.

    Read the article

  • Manager Self Service at your Fingertips

    - by Elaine Clement
    Last week we released new and improved Manager Self Service capabilities in PeopleSoft HCM 9.1. We delivered a new Manager Dashboard, streamlined many Manager Self Service transactions, provided new Pivot Grid capabilities, and implemented one-click Related Actions accessible from multiple places – all with the goal of improving every Manager’s self service experience.
    Manager Dashboard
    These new capabilities have the potential to significantly impact an organization’s bottom line, and here is why.
    Increased Efficiency
    The Manager Dashboard provides a ‘one-stop shop’ for your Managers with all of the key data they need consolidated into a single view. Alerts notifying managers of important tasks are immediately viewable and actionable. Administrators can configure the dashboard to include the most important pagelets needed for their organization, and Managers can personalize it to fit within their personal way of conducting their tasks. The Related Actions feature further improves the ease with which Managers get their work done by providing one-click access to Manager Self Service transactions.
    Increased Job Satisfaction
    The streamlined Manager transactions, related actions, and the new Manager Dashboard provide an enhanced user experience. Managers are able to quickly get in, get the information they need, complete their transactions, and get out. Managers can spend their time focusing on getting the business results they need instead of their day to day HR tasks.
    Enhanced Decision Support
    Administrators can ensure the information and analytics they want their Managers to use are available from the Manager Dashboard, establishing best business practices. Additional pivot grids relevant to your own organization can be added to the Manager Dashboard. With this easy access to the relevant information in an easily understood format, Managers can make the right business decisions needed to improve their team and their team’s productivity.
    For more details on the Manager Dashboard and some of the other newly posted features, such as a new Talent Summary, check out this video and others: Oracle PeopleSoft Webcasts

    Read the article

  • Versioning and Continuous Integration with project settings files

    - by Michael Stephenson
    I came across something which was a bit of a pain in the bottom the other week. Our scenario was that we had implemented a helper-style assembly which had some custom configuration implemented through the project settings. I'm sure most of you are familiar with this: you end up with a settings file which is viewable through the C# project file and in which you can configure some basic settings. The settings are embedded in the assembly during compilation as part of a DefaultValue attribute. You have the ability to override the settings by adding information to your app.config, and if the app.config doesn’t override the settings then the embedded default is used. All normal C# stuff so far… Where our pain started was when we implemented Continuous Integration and we wanted to version all of this from our build. What I was finding was that the assembly was versioned fine, but the embedded default value kept the non-CI build version number. I ended up getting this to work by using a build task to change the version numbers in the following files:
    App.config
    Settings.settings
    Settings.Designer.cs
    I think I probably could have got away with just the Settings.Designer.cs, but I wanted to keep them all consistent in case we had to look at the code on the build server for some reason. I think the reason this was painful is that Settings.Designer.cs is only updated through Visual Studio, which writes out the code to this file, including the DefaultValue attribute, when the project is saved rather than as part of the compilation process. The compile just compiles the already existing C# file. As I said, we got it working, but it was a bit of a pain. If anyone has a better solution for this I'd love to hear it.
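    For anyone wanting to try the same thing, the equivalent of that build task can be as simple as a search-and-replace step on the build server before compilation; a crude sketch, assuming the checked-in placeholder version is 1.0.0.0, the CI version arrives in an environment variable, and the designer file lives under Properties/ (adjust the paths to your own project layout):

      # Stamp the CI version into the files that carry the default value.
      BUILD_VERSION=${BUILD_VERSION:-1.0.0.0}
      for f in App.config Settings.settings Properties/Settings.Designer.cs; do
          sed -i "s/1\.0\.0\.0/${BUILD_VERSION}/g" "$f"
      done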

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS

    - by Lorenzo
    I have a VirtualBox lab made up of four Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses on NICs attached to the internal network. Everything works. I now have a requirement to run some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I have installed another VM with Ubuntu Server 12.04 64-bit and configured it (I hope) to work as a router, as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the internal network with its own static IP address on the internal subnet. But in practice the Windows servers do not reach the internet, while the Unix one does. I ran the route command; this is the result:

      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      default         10.69.121.1     0.0.0.0         UG    100    0        0 eth0
      10.69.121.0     *               255.255.255.0   U     0      0        0 eth0
      192.168.83.0    *               255.255.255.0   U     0      0        0 eth1

    Can somebody help me with this configuration? :) Thanks! Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should probably configure a forwarder, but I do not know exactly which server to forward to... :(
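    For the routing half of this, the usual recipe on the Ubuntu VM is IP forwarding plus NAT on the WAN interface; a sketch, assuming (as the route table suggests) eth0 is the bridged/WAN side and eth1 the internal side:

      # Let the VM forward packets between its two interfaces.
      sudo sysctl -w net.ipv4.ip_forward=1     # persist it in /etc/sysctl.conf

      # Masquerade internal traffic as it leaves through the WAN interface.
      sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    On the DNS side, the Windows DNS server can be given a public resolver (8.8.8.8, for example) as a forwarder; once NAT is in place those forwarded queries will go out through the Ubuntu router like any other traffic.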

    Read the article

  • Restoring an Ubuntu Server using ZFS RAID-Z for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP ProLiant MicroServer with Ubuntu Server and ZFS (OS on one ext4 disk, data on three RAID-Z disks). As Joel Spolsky and Jeff Atwood say with regards to backup, I can't rest until I have done a restore in all of the failure scenarios that I am seeking to protect against. Q: How do I configure Ubuntu Server to recognise a pre-existing RAID-Z array? Clearly if one of the data disks dies, then that is a resilvering scenario, which is well documented. If two of the data disks die, then I am into regular backup/restore land. If the OS dies and I can restore, that is also an easy scenario. But if the OS dies and I can't restore, then I need to recreate an Ubuntu server. But how do I get this to recognise my RAID-Z array? Is the necessary configuration information stored within and across the RAID-Z array, and does it simply need to be found (if so, how)? Or does it reside on the OS ext4 disk (in which case, how do I recreate it)?
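    For the "recreate the OS" scenario: the pool configuration lives on the RAID-Z member disks themselves, so a freshly installed server only needs the ZFS packages and an import; a sketch, assuming the pool was named tank:

      # Scan attached disks for pools that are not currently imported.
      sudo zpool import

      # Import the pool found above; -f is needed because the dead OS
      # never got the chance to export it cleanly.
      sudo zpool import -f tank
      sudo zpool status tank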

    Read the article

  • How to fix E: Internal Error, No file name for libc6

    - by Loren Ramly
    How do I fix "E: Internal Error, No file name for libc6"? The error shows up whenever I do $ sudo apt-get upgrade or $ sudo apt-get install <package>. This is an example:

      $ sudo apt-get upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages have been kept back:
        ginn hplip hplip-data libdrm-dev libdrm-intel1 libdrm-nouveau1a libdrm-radeon1
        libdrm2 libgrip0 libhpmud0 libkms1 libsane-hpaio libunity-2d-private0
        libunity-core-5.0-5 linux-generic-pae linux-headers-generic-pae
        linux-image-generic-pae printer-driver-hpcups printer-driver-hpijs unity
        unity-2d-common unity-2d-panel unity-2d-shell unity-2d-spread unity-common
        unity-services
      The following packages will be upgraded:
        alsa-base firefox firefox-globalmenu firefox-gnome-support firefox-locale-en
        icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-7-jre-jamvm libdbus-glib-1-2
        libdbus-glib-1-dev libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27
        libssl-dev libssl-doc libssl1.0.0 linux-sound-base openjdk-6-jre
        openjdk-6-jre-headless openjdk-6-jre-lib openjdk-7-jdk openjdk-7-jre
        openjdk-7-jre-headless openjdk-7-jre-lib openssl sudo
      27 upgraded, 0 newly installed, 0 to remove and 26 not upgraded.
      3 not fully installed or removed.
      Need to get 0 B/126 MB of archives.
      After this operation, 3,072 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      E: Internal Error, No file name for libc6

    I have followed the instructions from here (E: Internal Error, No file name for libssl1.0.0), which say to do:

      sudo apt-get update
      sudo apt-get clean
      sudo apt-get install -fy
      sudo dpkg -i /var/cache/apt/archives/*.deb
      sudo dpkg --configure -a
      sudo apt-get install -fy
      sudo apt-get dist-upgrade

    But I get stuck with the same error, E: Internal Error, No file name for libc6, when running sudo apt-get install -fy. I've been looking on Google, but have not been successful so far. Thanks.
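    One more thing worth trying when apt reports "No file name" for a package is to fetch the .deb manually and hand it straight to dpkg; a sketch (the exact libc6 version on your system will differ):

      # Download a fresh copy of the package into the current directory,
      # install it directly, then let apt finish the interrupted upgrade.
      sudo apt-get clean
      apt-get download libc6
      sudo dpkg -i libc6_*.deb
      sudo apt-get install -f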

    Read the article

  • Correct installation and configuration of OpenJDK and R

    - by Marco K
    I am relatively new to Ubuntu, so I won't know a lot of commands that have probably become standard to a lot of you. I am trying to set up R and, with it, the necessary Java dependencies to install e.g. JGR, rJava, etc. I read through quite a few instructions to do that, but somehow I must have done something wrong. Here is the state of R and Java:

      R --version
      R version 2.14.1 (2011-12-22)
      Copyright (C) 2011 The R Foundation for Statistical Computing
      ISBN 3-900051-07-0
      Platform: x86_64-pc-linux-gnu (64-bit)

      java -version
      java version "1.6.0_23"
      OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1)
      OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

      R CMD javareconf
      Java interpreter : /usr/bin/java
      Java version     : 1.6.0_23
      Java home path   : /usr/lib/jvm/java-6-openjdk/jre
      Java compiler    : /usr/bin/javac
      Java headers gen.: /usr/bin/javah
      Java archive tool: /usr/bin/jar
      Java library path: /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
      JNI linker flags : -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm
      JNI cpp flags    :

    But when I try to install 'JavaGD' in R, which is a dependency for JGR, I get:

      ...
      checking Java support in R... present:
      interpreter : '/usr/bin/java'
      cpp flags   : ''
      java libs   : '-L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm'
      configure: error: One or more Java configuration variables are not set.
      Make sure R is configured with full Java support (including JDK).
      Run R CMD javareconf as root to add Java support to R.
      ...

    Any help would be greatly appreciated. Thanks!
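    The empty "cpp flags" line is usually the tell-tale: the configure script cannot find the JNI headers, so Java-dependent packages refuse to build. A sketch of the usual fix, assuming the OpenJDK 6 packages shown above:

      # Make sure the full JDK (not just the JRE) and R's build tools are present,
      # then re-run the detection as root so the flags land in R's site config.
      sudo apt-get install openjdk-6-jdk r-base-dev
      sudo R CMD javareconf

      # JavaGD/JGR should then build; rJava is also available as a distro package.
      sudo apt-get install r-cran-rjava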

    Read the article

  • Would Using a PHP Framework Be Beneficial in My Context?

    - by Fractal
    I've just started work at a small start-up company who mainly uses PHP to develop their front-end apps. I had no prior PHP experience before joining, and this has led to my apps becoming large pieces of spaghetti code. I essentially started by adding code to implement an initial feature, and then continued to hack in more code to implement further features – without much thought for the overall design. The apps themselves output XML to render on small mobile devices. I recently started looking into frameworks that I could use. I reckon an advantage would be that they seem to force developers to modularise their programs using good-practice design patterns. This seems great for someone in my position. The extra functions they provide, for example: interfacing with databases in such a way as to make SQL injection impossible, would be very useful too. The downside I can see is that there will be a lot of overhead for me in terms of the time taken to learn the framework itself (while still getting to grips with PHP itself). I'm also worried that it will be overkill for the scale of the apps we develop. They tend to be programs that interface with a fairly simple back-end DB, and will generate about 5 different XML screens. Probably around 1 or 2 thousand lines of code. The time it takes just to configure the frameworks may not be worth it. The final problem I can see is that developers in the company – who have to go over my code, and who do not know the PHP framework I may use – will have a much harder time understanding it. Given those pros and cons, I'm still not sure on what the best course of action will be; so any advice will be greatly appreciated.

    Read the article

  • How can I set external monitor as default?

    - by iJeeves
    I have connected an external monitor to my laptop through HDMI. Currently my desktop is either extended to the external monitor (at its native resolution) or shown at a low resolution on both when I choose "Same image in both". How can I ensure that the external monitor is used by default and the laptop monitor just blanks? I generated the xorg.conf file by running: X -configure. The following is the content of the xorg.conf.new file generated in my user folder. Should I copy this anywhere? Should I edit the contents?

      Section "ServerLayout"
          Identifier     "X.org Configured"
          Screen      0  "Screen0" 0 0
          InputDevice    "Mouse0" "CorePointer"
          InputDevice    "Keyboard0" "CoreKeyboard"
      EndSection

      Section "Files"
          ModulePath   "/usr/lib/xorg/modules"
          FontPath     "/usr/share/fonts/X11/misc"
          FontPath     "/usr/share/fonts/X11/cyrillic"
          FontPath     "/usr/share/fonts/X11/100dpi/:unscaled"
          FontPath     "/usr/share/fonts/X11/75dpi/:unscaled"
          FontPath     "/usr/share/fonts/X11/Type1"
          FontPath     "/usr/share/fonts/X11/100dpi"
          FontPath     "/usr/share/fonts/X11/75dpi"
          FontPath     "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
          FontPath     "built-ins"
      EndSection

      Section "Module"
          Load  "glx"
          Load  "dri2"
          Load  "record"
          Load  "extmod"
          Load  "dbe"
          Load  "dri"
      EndSection

      Section "InputDevice"
          Identifier  "Keyboard0"
          Driver      "kbd"
      EndSection

      Section "InputDevice"
          Identifier  "Mouse0"
          Driver      "mouse"
          Option      "Protocol" "auto"
          Option      "Device" "/dev/input/mice"
          Option      "ZAxisMapping" "4 5 6 7"
      EndSection

      Section "Monitor"
          Identifier   "Monitor0"
          VendorName   "Monitor Vendor"
          ModelName    "Monitor Model"
      EndSection

      Section "Device"
          ### Available Driver options are:-
          ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
          ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
          ### <percent>: "<f>%"
          ### [arg]: arg optional
          #Option     "NoAccel"            # [<bool>]
          #Option     "SWcursor"           # [<bool>]
          #Option     "ColorKey"           # <i>
          #Option     "CacheLines"         # <i>
          #Option     "Dac6Bit"            # [<bool>]
          #Option     "DRI"                # [<bool>]
          #Option     "NoDDC"              # [<bool>]
          #Option     "ShowCache"          # [<bool>]
          #Option     "XvMCSurfaces"       # <i>
          #Option     "PageFlip"           # [<bool>]
          Identifier  "Card0"
          Driver      "intel"
          BusID       "PCI:0:2:0"
      EndSection

      Section "Screen"
          Identifier "Screen0"
          Device     "Card0"
          Monitor    "Monitor0"
          SubSection "Display"
              Viewport   0 0
              Depth     1
          EndSubSection
          SubSection "Display"
              Viewport   0 0
              Depth     4
          EndSubSection
          SubSection "Display"
              Viewport   0 0
              Depth     8
          EndSubSection
          SubSection "Display"
              Viewport   0 0
              Depth     15
          EndSubSection
          SubSection "Display"
              Viewport   0 0
              Depth     16
          EndSubSection
          SubSection "Display"
              Viewport   0 0
              Depth     24
          EndSubSection
      EndSection
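    Before reaching for xorg.conf, it is worth knowing this can usually be done at runtime with xrandr; a sketch, assuming the outputs are named LVDS1 (laptop panel) and HDMI1 (run plain xrandr first to get the real names):

      # List the connected outputs and the modes they offer.
      xrandr

      # Drive the external screen at its native resolution and blank the panel.
      xrandr --output HDMI1 --mode 1920x1080 --primary --output LVDS1 --off

    Putting those options into the display settings or a small startup script avoids hand-editing xorg.conf at all; the generated xorg.conf.new does not need to be copied anywhere for this approach.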

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion that SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place. And information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit “iffy” on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states: “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.” Even in the case of single-server deployments you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a Full Server Backup. Then, when a restore operation is needed, you will be able to select specifically the section that holds the SharePoint technologies backup.
    The restore process:
    Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm and such. Credits: Sean McDonough for passing along the information available on TechNet.

    Read the article

  • OBIEE 11.1.1 - How to enable HTTP compression and caching in Oracle iPlanet Web Server

    - by Ahmed Awan
    1. To implement HTTP compression / caching, install and configure Oracle iPlanet Web Server 7.0.x for the bi_serverN Managed Servers (refer to http://docs.oracle.com/cd/E23943_01/web.1111/e16435/iplanet.htm).
    2. On the Oracle iPlanet Web Server machine, open the Administrator's Configuration file (obj.conf) for editing. (Guidelines for modifying the obj.conf file are available at http://download.oracle.com/docs/cd/E19146-01/821-1827/821-1827.pdf)
    3. Add the following lines to the obj.conf file inside <Object name="default"> ... </Object> and restart the Oracle iPlanet Web Server machine:

      #HTTP Caching
      <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
      ObjectType fn="set-variable" insert-srvhdrs="Expires:$(httpdate($time + 864000))"
      </If>

      <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
      PathCheck fn="set-cache-control" control="public,max-age=864000"
      </If>

      #HTTP Compression
      Output fn="insert-filter" filter="http-compression" vary="false" compression-level="9" fragment_size="8096"
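    A quick way to check that both settings took effect after the restart is to request a static resource with compression allowed and look at the response headers; a sketch (the host name and resource path are only placeholders):

      # Expect "Content-Encoding: gzip" plus a far-future "Expires:" /
      # "Cache-Control: public,max-age=864000" header on matching files.
      curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://bi-host.example.com/analytics/res/common.js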

    Read the article

  • Spotlight on an ACE: Edwin Biemond

    - by jeckels
    Edwin Biemond is an active member of the ACE community, having worked with Oracle's development tooling and database technologies since 1997. Since then, Edwin has become an expert in many of Oracle's middleware technologies as well, including WebLogic and SOA. In fact, Edwin has become so prolific that he was named the Java Developer of the Year in 2009. Edwin hails from the Netherlands, where he is an architect at the company Amis, and is also a co-author of the OSB Development Cookbook. He's a proven expert in ADF, JSF, messaging (Edifact / ebXML), Enterprise Service Bus, web services and tuning of application servers and databases. Recently, Edwin posted a blog on the road map of WebLogic 12c, going over salient features and what the future looks like for Fusion Middleware and the Application Server areas - it's well worth a read, so give it a look. A snippet: WebLogic 12.1.3 will be the first version for many FMW 12c products like Oracle SOA Suite 12c and probably come in one big jar. 12.1.3 & 12.1.4 will add extra features and improvements to Elastic JMS & Dynamic Clusters. Elastic JMS in 12.1.3 will support Server Migration so you can’t lose any JMS messages. In 12.1.4, Dynamic Clusters will have support for auto-scaling based on thresholds based on user-defined metrics. WebLogic 12.1.4 will also have an API to control the Dynamic Clusters, this way we can easily program when to stop, start or remove nodes from a dynamic cluster. Further, Edwin is hosting a session on getting your FMW environment up and running in less than 10 minutes using popular tooling to configure and manage the many FMW components you have in your technology stack. Register now for this virtual developer day to see more. We thank Edwin for his commitment to being an ACE, his work on his blog, his social media publishing and his overall commitment to helping other technologists be even more successful with Oracle products. Follow Edwin on his blog, Twitter, Facebook, LinkedIn, or read his ACE Profile

    Read the article

  • Error when running debuild on package source

    - by Chris Wilson
    I'm attempting to build the squeak-vm source but am getting an error every time I do so. The output is:

      dpkg-buildpackage -rfakeroot -D -us -uc
      dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2
      dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor):
      dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2
      dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
      dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions
      dpkg-buildpackage: source package squeak-vm
      dpkg-buildpackage: source version 1:4.0.3.2202-2
      dpkg-buildpackage: source changed by José L. Redrejo Rodríguez <[email protected]>
      dpkg-source --before-build squeak-vm-4.0.3.2202
      dpkg-buildpackage: host architecture i386
      fakeroot debian/rules clean
      dh_testdir
      dh_testroot
      rm -f build-stamp configure-stamp
      rm -f unix/cmake/config.sub unix/cmake/config.guess
      /usr/bin/make -f debian/rules unpatch
      make[1]: Entering directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202'
      QUILT_PATCHES=debian/patches \
      quilt --quiltrc /dev/null pop -a -R || test $? = 2
      Patch linex.patch does not remove cleanly (refresh it or enforce with -f)
      make[1]: *** [unpatch] Error 1
      make[1]: Leaving directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202'
      make: *** [clean] Error 2
      dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2
      debuild: fatal error at line 1337: dpkg-buildpackage -rfakeroot -D -us -uc failed
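    The failing step is quilt trying to unapply linex.patch, which no longer matches the source tree; the message itself offers the two ways out: refresh the patch or force the pop. A sketch of the forced variant, run from the top of the unpacked source (it may leave stray changes behind, so review the tree afterwards):

      # Force the stale patch stack off, then retry the build.
      export QUILT_PATCHES=debian/patches
      quilt pop -a -f
      debuild -us -uc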

    Read the article

  • links for 2011-02-14

    - by Bob Rhubart
    Glenn Fawcett: Solaris Eye for the Linux Guy, or how I learned to stop worrying about Linux and Love Solaris (Part 1)
    Glenn says: "This entry goes out to my Oracle techie friends that have been in the Linux camp for some time now and are suddenly finding themselves needing to know more about Solaris… hmmmm… I wonder if this has anything to do with Solaris now being an available option with Exadata?" (tags: linux solaris oracle)
    Enterprise Software Development with Java: High Performance JPA with GlassFish and Coherence - Part 2
    Oracle ACE Director Markus Eisele describes "the steps you have to take to configure a JPA backed Cache with Coherence and how you could use it from within GlassFish as a high performance data store." (tags: oracle otn oracleace java glassfish coherence)
    TOGAF a Registered Trademark and Surpasses 15k Certifications (EA Blogs)
    Mike Walker relays news on the TOGAF standard. (tags: entarch togaf)
    Weblogic or wait? | Capping IT Off | Capgemini
    "So when would you move over to the new Oracle Technology?" asks Arjan Kramer. "Well, as always there can be several reasons..." (tags: oracle capgemini weblogic)
    Random Monday Thoughts (Art of SOA Governance)
    "Governance is what insurance is to new cars, be it to SOA, IT transformations and software development. Governance is an insurance policy against risk of failure." - Terry Goldman (tags: oracle otn soa soagovernance)

    Read the article

  • Unhelpful Help

    - by Geoff N. Hiten
    Up until SQL 2012, I recommended installing Books Online (BOL) anywhere you installed SQL Server. It made looking up reference information simpler, especially when you were on a server that didn't have direct Internet access. That all changed today. I started the new Help Viewer with a local copy of BOL. I actually found what I was looking for and closed the app. Or so I thought. Then I noticed something: a little parasite had attached itself to my system. Yep, the "Help" system left an "agent" behind. Now I shouldn't have to tell you that running application helper agents on server platforms is a bad idea. And it gets worse. There is no way to configure the app so that it does NOT start the parasite agent each time you restart Help. So the solution becomes: do not install Help on production server platforms. Which is pretty unhelpful.

    Read the article

  • Beginners Guide to Client Application Services

    - by mbcrump
    What is it?
    Client application services make it easy for you to create Windows-based applications that use the ASP.NET AJAX login, roles, and profile application services included in the Microsoft ASP.NET 2.0 AJAX Extensions. These services enable multiple Web and Windows-based applications to share user information and user-management functionality from a single server.
    What can you do with it?
    Authenticate a user. You can use the authentication service to verify a user's identity.
    Determine the role or roles of an authenticated user. You can use the roles service to change the user interface of your application depending on the user's role. For example, you can provide additional features for users who are in an administrator role.
    Store and access per-user application settings located on the server. You can use the Web settings service (also known as the profile service) to share settings across multiple applications and locations.
    Client application services take advantage of the Web services extensibility model through client service providers that you can specify in your application configuration files. These service providers include offline functionality that uses a local cache for authentication, roles, and settings data when a network connection is unavailable.
    Give me an example of where I would use this!
    Sharing login and user role information between a Windows Forms application and an ASP.NET application.
    How do I configure it?
    Click Here

    Read the article

  • links for 2011-03-09

    - by Bob Rhubart
    Is there a Telecommunications Reference Architecture? (Telecommunications Architecture Corner)
    The answer is "yes," and Raul Goycoolea shares the details. (tags: oracle otn enterprisearchitecture)
    Oracle@info360: Advance Beyond Point Solutions To An Enterprise Content Strategy (Oracle Enterprise 2.0 Blog)
    Kellsey Ruppel shares information on some of the speakers at the upcoming info360/AIIM conference. (tags: oracle otn enterprise2.0 aiim info360)
    ERP in the Cloud for Local Government | Oracle Blog | Capgemini | Consulting, Technology, Outsourcing
    In these times of austerity, Local Authorities are facing significant reductions in budgets (on average over 30%). Now that the easier savings have been realised, Councils are faced with two options: cutting services, or revolutionary changes to the way they do things today. (tags: oracle capgemini cloud)
    Mobile HR Apps
    "Good, so we have plenty of commercial applications making use of the smart phone," says Raheel Khan. "But what about core backend business applications?" (tags: oracle mobilecomputing)
    Policy Administration is the Top 2011 IT Priority for Insurers (Oracle Insurance)
    "Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities." - Helen Pitts (tags: oracle otn enterprisearchitecture)
    Free: Oracle Technology Network Architect Day - Denver - March 23
    The live one-day event in Denver brings together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today's architects regularly face. The event is free, but seating is limited. (tags: oracle otn enterprisearchitecture cloud optimization)
    InfoQ: Randy Shoup on Evolvable Systems
    Randy Shoup discusses evolvable systems: how to run different versions of a system in parallel during migrations, decoupling a system with events, schemas at eBay and much more. (tags: ping.fm)

    Read the article

  • How do I install Red5 using apt-get? Getting sub-process error

    - by Dalen
    This is a copy of a question someone asked on another forum that never got a satisfactory answer. I encountered the same error a few days ago on Ubuntu 13.04 Desktop. It seems like Red5 is installed, but it cannot be run for some reason. Can anyone explain what is going on here? Why should dpkg fail? I mean, this is a checked repo; it should work fine.

      apt-get install red5-server
      Selecting previously deselected package red5-server.
      (Reading database ... 53491 files and directories currently installed.)
      Unpacking red5-server (from .../red5-server_0.9.1-4squeeze1_all.deb) ...
      Setting up red5-server (0.9.1-4squeeze1) ...
      Starting Flash streaming server : red5-server failed!
      invoke-rc.d: initscript red5-server, action "start" failed.
      dpkg: error processing red5-server (--configure):
       subprocess installed post-installation script returned error exit status 1
      configured to not write apport reports
      Errors were encountered while processing:
       red5-server
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    The logfile error.log in /usr/share/red5/log was completely empty. The other logs were not, but according to them there were no problems at all.
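    When a postinst fails like this, running the init script by hand usually shows why the daemon refuses to start; a sketch (the Java package name is just an example, Red5 simply needs a working JRE):

      # Trace what the init script actually does when it "fails".
      sudo sh -x /etc/init.d/red5-server start
      sudo tail -n 50 /var/log/syslog

      # Red5 needs a working Java runtime.
      java -version || sudo apt-get install default-jre-headless

      # Once the service starts by hand, let dpkg finish the configuration.
      sudo dpkg --configure -a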

    Read the article

  • How do I stop Ubuntu from detaching minimize/maximize/close buttons?

    - by Shahbaz
    Some time ago I managed to get Ubuntu to keep the window menu bars in the windows themselves rather than in the bar above (I'm not sure if this part is Unity or Compiz, or what the difference is). That was done by removing indicator-appmenu. Anyway, now everything is fine except one thing: if I have a window that is maximized, the minimize/maximize/close buttons are still grabbed by the bar at the top. Usually this doesn't cause a problem, because the upper-left corner of the maximized window and that of the whole screen are not far apart. However, one thing happens to me a lot: I am working on something (programming), then I need to check some things elsewhere, so I open some windows, see what I want, and switch back to my work. Those windows, however, are temporary, so at some point I want to close them. Now here's what happens: I have the focus on some window and I can't close the maximized window behind it unless I click on it first, so that its buttons appear, and then close it. I couldn't find anything on the internet about this. Is this something that's hardcoded in Unity/Compiz/whatever, or is there actually a way to configure it?

    Read the article

  • Cannot set monitor to native resolution

    - by S B
    My problem is similar to that of so many other users, but the solutions I found do not work. Background: fresh install of 12.04 (completely updated) on a Fit-PC2 (specs). I read in several places that the new 3.x kernel that 12.04 runs on has a new psb_gfx driver which supports the gma500 graphics chip (Poulsbo chipset). Pretty much everything works (there are some glitches which are documented, so I won't raise them here), except for the screen resolution. My monitor's native resolution is 1920x1080, but all I get is 1024x768. Output of running xrandr:

      xrandr: Failed to get size of gamma for output default
      Screen 0: minimum 1024 x 768, current 1024 x 768, maximum 1024 x 768
      default connected 1024x768+0+0 0mm x 0mm
         1024x768       0.0*

    Although I read that Ubuntu does not ship an xorg.conf file anymore, I also tried running sudo X :1 -configure, and here's the end of the output:

      Number of created screens does not match number of detected devices.
      Configuration failed.

    When I look in the xorg.conf.new file created in my home directory, it seems that for some reason X thinks I have two screens. I don't know what to do with that. Ideas, anyone? Thanks for your time.
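    If the driver is only advertising 1024x768, it is sometimes possible to add the native mode by hand with cvt and xrandr; a sketch using the output name "default" reported above (whether the mode actually works depends on the psb_gfx/gma500 driver):

      # Generate a modeline for 1920x1080 at 60 Hz and attach it to the output.
      cvt 1920 1080 60
      xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
      xrandr --addmode default "1920x1080_60.00"
      xrandr --output default --mode "1920x1080_60.00"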

    Read the article

  • Why is the root partition on my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After having run apt-get purge and apt-get autoremove and emptied the Trash can, I still have this problem, as shown by this screenshot of GParted: the disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04 I did not configure my disks correctly and the partition /dev/sda6 is not properly mounted as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you, everybody):
    My home directory is not encrypted. The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself and manually.) After I mount /dev/sda6, the command df -h gives:

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda7       244G  221G   12G  96% /
      udev            3,9G  4,0K  3,9G   1% /dev
      tmpfs           1,6G  904K  1,6G   1% /run
      none            5,0M     0  5,0M   0% /run/lock
      none            3,9G  164K  3,9G   1% /run/shm
      /dev/sda6       653G  189G  433G  31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    (Sorry, but formatting this into code does not work, for an unknown reason.) Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.
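    If /dev/sda6 was indeed meant to be /home, an fstab entry will make the mount permanent; a sketch (the UUID is the one visible in the df output above, but confirm it with blkid):

      # Confirm the filesystem UUID of the data partition.
      sudo blkid /dev/sda6

      # Then add a line like this to /etc/fstab:
      #   UUID=8ec2fa69-039b-4c52-ab1b-034d785132a1  /home  ext4  defaults  0  2

      # Move aside anything written to the old /home on sda7 first, then:
      sudo mount -a
      df -h /home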

    Read the article
