Search Results

Search found 54494 results on 2180 pages for 'system management'.


  • Accessing JMX for Oracle WebLogic 11g

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4, we use the latest Oracle WebLogic release (11g). The instructions below illustrate a way of allowing a console like jconsole to remotely monitor and manage Oracle WebLogic using the JMX MBeans. Typically, management of Oracle WebLogic is done from Oracle Enterprise Manager or the Oracle WebLogic console application, but you can also use JMX. To access the JMX capability for Oracle WebLogic 11g, for an Oracle Utilities Application Framework based product, using a JMX console (such as jconsole), the following process needs to be performed: Enable the JMX Management Server in the Oracle WebLogic console under the splapp - Configuration - General - Advanced Settings option. Enable both Compatibility MBean Server Enabled and Management EJB Enabled (this enables the legacy and the new JMX interface). Save the changes; this change will require a restart. During startup of the Oracle WebLogic server, in $SPLSYSTEMLOGS/myserver.log (or %SPLSYSTEMLOGS%\myserver.log on Windows) you will see the BEA-149512 message indicating the MBean servers have been started. The message will indicate the JMX URL that can be used to access the JMX MBeans. The URL is in the format service:jmx:iiop://host:port/jndi/mbeanserver where: host - Oracle WebLogic host name; port - Oracle WebLogic port number; mbeanserver - MBean server to access, with valid values weblogic.management.mbeanservers.runtime, weblogic.management.mbeanservers.edit and weblogic.management.mbeanservers.domainruntime. For illustrative purposes we will use the domainruntime MBean server. Ensure that you execute the splenviron[.sh] utility to set the appropriate environment variables for the desired environment, then execute the following jconsole command to initiate the connection to the JMX MBean server. Windows: jconsole -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%WL_HOME%\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote Linux/UNIX: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote (note the ':' classpath separator on Linux/UNIX). You will see a New Connection dialog. Specify the URL from the previous steps in the Remote Process field (i.e. service:jmx:iiop...). The credentials are the credentials specified for the Oracle WebLogic console. You are now able to browse the available JMX classes. Refer to the Oracle WebLogic MBean documentation to understand the output.

    Read the article

  • "Loading operating system... boot error" when booting from live CD

    - by jeremy
    I'm having a problem installing Ubuntu 12.10 on a new drive. I was running Windows 7 on my SSD, but when the drive crashed, I decided to use that as an excuse to make the switch to Ubuntu. I've been experimenting with it on my old laptop until I got my SSD replaced under warranty. Now I have my SSD back and want to install Ubuntu on my desktop machine. I used UNetbootin to make a bootable flash drive. I then went into my BIOS and made sure USB loaded before the hard drive. However, when I try to boot from it I get an error that says: Loading operating system ... boot error I know the flash drive works, because if I reboot my laptop or my other Windows PC with the flash drive plugged in, it loads into Ubuntu; the error only appears when I try to do it on the PC with no OS currently on the drive.

    Read the article

  • Creating an HttpHandler to handle request of your own extension

    - by Jalpesh P. Vadgama
    I have already posted about HTTP handlers in detail some time ago here. Now let's create an HTTP handler which will handle my custom extension. For that we need to create an HTTP handler class which implements IHttpHandler. As we are implementing IHttpHandler, we need to implement one method called ProcessRequest and one property called IsReusable. The ProcessRequest function will handle all the requests for my custom extension. So here is the code for my HTTP handler class. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; namespace Experiement { public class MyExtensionHandler:IHttpHandler { public MyExtensionHandler() { //Implement initialization here } bool IHttpHandler.IsReusable { get { return true; } } void IHttpHandler.ProcessRequest(HttpContext context) { string excuttablepath = context.Request.AppRelativeCurrentExecutionFilePath; if (excuttablepath.Contains("HelloWorld.dotnetjalps")) { Page page = new HelloWorld(); page.AppRelativeVirtualPath = context.Request.AppRelativeCurrentExecutionFilePath; page.ProcessRequest(context); } } } } Here in the above code you can see that in the ProcessRequest function I am getting the current executable path and then processing that page. Now let's create a page with the extension .dotnetjalps and then we will process this page with the HTTP handler created above. So let's create it. It will create a page like the following. Now let's write something in the Page_Load event like the following. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace Experiement { public partial class HelloWorld : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World"); } } } Now we have to tell our web server that we want to process requests for this .dotnetjalps extension through our custom HTTP handler. For that we need to add a tag in the httpHandlers section of web.config like the following. <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <httpHandlers> <add verb="*" path="*.dotnetjalps" type="Experiement.MyExtensionHandler,Experiement"/> </httpHandlers> </system.web> </configuration> That's it. Now run that page in the browser and it will execute as follows. Isn't it cool? Stay tuned for more. Happy programming. Technorati Tags: HttpHandler,ASP.NET,Extension

    Read the article

  • What is the difference between String and string in C#

    - by SAMIR BHOGAYTA
    string: The string type represents a sequence of zero or more Unicode characters. string is an alias for String in the .NET Framework. 'string' is the intrinsic C# datatype, and is an alias for the system-provided type "System.String". The C# specification states that, as a matter of style, the keyword ('string') is preferred over the full system type name (System.String, or String). Although string is a reference type, the equality operators (== and !=) are defined to compare the values of string objects, not references. This makes testing for string equality more intuitive (see the example below). String: A String object is called immutable (read-only) because its value cannot be modified once it has been created. Methods that appear to modify a String object actually return a new String object that contains the modification. If it is necessary to modify the actual contents of a string-like object, use the System.Text.StringBuilder class instead. Difference between string & String: string is usually used for declarations, while String is used for accessing static string methods. We can use 'string' to declare fields, properties etc. that use the predefined type 'string', since the C# specification tells us this is good style. We can use 'String' to call system-defined methods, such as String.Compare etc.; they are originally defined on 'System.String', not 'string' - 'string' is just an alias in this case. We can also use 'String' or 'System.Int32' when communicating with other systems, especially if they are CLR-compliant, i.e. if I get data from elsewhere, I'd deserialize it into a System.Int32 rather than an 'int' if the origin by definition was something other than a C# system.
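    A short, self-contained C# sketch illustrating these points (this example is added here for illustration and is not part of the original answer; the class name StringAliasDemo is made up):

    using System;

    class StringAliasDemo
    {
        static void Main()
        {
            // 'string' and 'String' declare exactly the same type.
            string s1 = "hello";
            String s2 = "hello";

            // == compares string values, not references.
            Console.WriteLine(s1 == s2);                  // True (same characters)
            Console.WriteLine(ReferenceEquals(s1, s2));   // True here only because identical literals are interned

            // Strings are immutable: "modifying" methods return new instances.
            string upper = s1.ToUpper();
            Console.WriteLine(s1);                        // hello (unchanged)
            Console.WriteLine(upper);                     // HELLO

            // Static helpers are conventionally called via the String type name.
            Console.WriteLine(String.Compare(s1, s2));    // 0 (equal)
        }
    }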

    Read the article

  • I tried to port wireless_tools to an Android 4.0 system, but I have a problem with Java

    - by Chen Guoli
    My Linux system is Ubuntu Kylin, a new Ubuntu derivative used mainly in China. I have changed some files, such as wireless.22.h, ifrename.c and iwlib.h, in wireless_tools.29/, which is located in the Android 4.0 root directory. Then I followed these steps: $ cd ~/Android4.0 $ su (enter password to change to root) root# source build/envsetup.sh root# cd ~/Android4.0/wireless_tools.29/ root# mm Then I got a message telling me: Your version is: java version "1.7.0_21". The correct version is: Java SE 1.6. Then I followed "How can I uninstall my current Java and install Sun Java 1.6". Java was installed successfully, but when I tried mm again: Your version is: java version "1.6.0_27". The correct version is: Java SE 1.6. Then I tried https://source.android.com/source/initializing.html, but it didn't work. How can I install Java SE 1.6 correctly?

    Read the article

  • Odd MVC 4 Beta Razor Designer Issue

    - by Rick Strahl
    This post is a small cry for help along with an explanation of a problem that is hard to describe on Twitter or even in a Connect bug, written in the hope that somebody has seen this before and has an idea of what might cause it. Lots of helpful people had comments on Twitter for me, but they all assumed that the code doesn't run, which is not the case - it's a designer issue. A few days ago I started getting some odd problems in my MVC 4 designer for an app I've been working on for the past 2 weeks. Basically the MVC 4 Razor designer keeps popping up errors about the call signatures to various Html Helper methods being incorrect. It also complains about the ViewBag object not supporting dynamic, requesting me to load assemblies into the project. You can see the red error underlines under the ViewBag and an Html Helper I plopped in at the top to demonstrate the behavior. Basically any HtmlHelper I'm accessing is showing the same errors. Note that the code *runs just fine* - it's just the designer that is complaining with errors. What's odd about this is that *I think* this started only a few days ago and nothing consequential that I can think of has happened to the project or overall installations. These errors aren't critical since the code runs, but they're pretty annoying, especially if you're building and have .cshtml files open in Visual Studio, mixing these fake errors with real compiler errors. What I've checked: Looking at the errors it indeed looks like certain references are missing. I can't make sense of the Html Helpers error, but certainly the ViewBag dynamic error looks like the System.Core or Microsoft.CSharp assemblies are missing. Rest assured they are there and the code DOES run just fine at runtime. This is a designer issue only. I went ahead and checked the namespaces that MVC has access to in Razor, which live in the Views folder's web.config file (/Views/web.config): <system.web.webPages.razor> <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, <split for layout> Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <pages pageBaseType="System.Web.Mvc.WebViewPage"> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> <add namespace="System.Linq" /> <add namespace="System.Linq.Expressions" /> <add namespace="ClassifiedsBusiness" /> <add namespace="ClassifiedsWeb"/> <add namespace="Westwind.Utilities" /> <add namespace="Westwind.Web" /> <add namespace="Westwind.Web.Mvc" /> </namespaces> </pages> </system.web.webPages.razor> For good measure I added System.Linq and System.Linq.Expressions, on which some of the Html.xxxxFor() methods rely, but no luck. So, has anybody seen this before? Any ideas on what might be causing these issues only at design time, when the final compiled code runs just fine? © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Razor, MVC.

    Read the article

  • Project-Based ERP - The Evolution of Project Management

    Fred Studer speaks with Ray Wang, Principal Analyst at Forrester Research and Ted Kempf, Senior Director for Oracle's Project Management Solutions about trends in the project management market, where enterprise project management is heading in the next 2 - 3 years and highlights from Ray's new line of research on project management solutions.

    Read the article

  • How to move the 100 MB hidden System Reserved partition on Windows Server 2008 R2 to another drive?

    - by Artyom Krivokrisenko
    Hello! I have a server with two 1.5TB hard drives. I was going to install Windows Server 2008 R2 and create a software RAID1 using the Windows Disk Management utility. I installed Windows, opened this console and this is what I see: http://i.imgur.com/KoC9a.png The setup program created a System Reserved Partition on my second HDD. Now I don't understand how I can create the RAID1, because the space which was supposed to hold the copy of disk C is now used by this hidden partition. So is there any way to create a correct RAID1 now? Maybe it is possible to move this partition to Disk 0, where I have plenty of free space? Unfortunately I can't reinstall Windows and choose other options at the disk management step of the installation, because the installation image is no longer connected to the server and I have no physical access to the server, only Remote Desktop.

    Read the article

  • Do you want to know the impact of the Oracle Transportation Management rollout at Unilever Europe?

    - by user703855
    Unilever announces a new logistics project that will allow it to achieve, by 2014, a reduction of approximately 200 million km per year in the distance travelled by its trucks. These figures take the 2010 traffic levels in Europe as a reference. Jan Zijderveld, President of Unilever Europe, says: "This project is a big step in our Sustainability plan. Reducing the number of km our trucks need to travel will mean a very significant reduction in the environmental impact of our supply chain. But the business benefits obtained through this initiative are just as important. They demonstrate, once again, the business case implicit in achieving that sustainability. Not only will we reduce our carbon emissions, but the long-term savings obtained from that reduction will also help us become more efficient and cost-effective." You can read Unilever's full press release here

    Read the article

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
    Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I’m paraphrasing because I can’t remember the exact tweet and Twitter’s search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be “how many SQL Server developers are not using source control?” because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age. Then I started thinking about it. “Perhaps I’m wrong” I pondered, “perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority”. So, dear reader, I’m interested to know a little bit more about your use of source control. Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies (“best” is totally subjective of course but this is my blog so my decision is final ), if you want to be considered don’t forget to leave contact details; email address, Twitter handle or similar will do. To start you off and to perhaps get the brain cells whirring, here are my answers to the questions above: Are you putting your SQL Server code into a source control system? As I think I’ve already said…yes. Always. If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and as part of a separate project is trialing the use of Team Foundation Service. I have used SVN extensively in the past which I am a fan of (I generally prefer it to TFS) and am trying to get my head around Git by using it for ObjectStorageHelper. What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN. Why did you make those particular software choices? I generally use whatever the client uses and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops. Any interesting anecdotes to share in regard to your use of source control and SQL Server? Not an anecdote as such but I am going to share some frustrations about TFS. In many ways TFS is a great product because it integrates many separate functions (source control, work item tracking, build agents) into one whole and I’m firmly of the opinion that that is a good thing if for no reason other than being able to associate your check-ins with a work-item. However, like many people there are aspects to TFS source control that annoy me day-in, day-out. 
Chief among them has to be the fact that it uses a file’s read-only property to determine if a file should be checked-out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn’t realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus be checked-in; the notion of “check-out” doesn’t even exist. That sounds like a small thing but you don’t realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember….free software is up for grabs! @jamiet 

    Read the article

  • Why doesn't my VPN work anymore?

    - by xx77aBs
    I have openvpn server running on debian lenny. There is only one client - and it is running Windows 7 64-bit. This has worked for few months without any problems. And now, let's say for the last 7 days, it doesn't work at all. I connect successfully from client to the server, but I can't access anything through VPN. I have set it up so that all internet traffic is routed through VPN, and now when I connect with the client, the client can't do anything on the net (open any webpage, ping google, anything ...). Can you help me to figure out what's wrong ? I don't know where to start. I've also tried to connect to another openvpn server (I've installed and configured openvpn on another server, and when I try to connect to it result is the same). So I think there's something wrong with client ... Here is my connection log: Wed Apr 04 21:35:59 2012 OpenVPN 2.3-alpha1 Win32-MSVC++ [SSL (OpenSSL)] [LZO2] [PF_INET6] [IPv6 payload 20110522-1 (2.2.0)] built on Feb 21 2012 Enter Management Password: Wed Apr 04 21:35:59 2012 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.10:25340 Wed Apr 04 21:35:59 2012 Need hold release from management interface, waiting... Wed Apr 04 21:36:00 2012 MANAGEMENT: Client connected from [AF_INET]127.0.0.10:25340 Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'state on' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'log all on' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'hold off' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'hold release' Wed Apr 04 21:36:00 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. Wed Apr 04 21:36:00 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Wed Apr 04 21:36:00 2012 Socket Buffers: R=[8192->8192] S=[8192->8192] Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,RESOLVE,,, Wed Apr 04 21:36:00 2012 UDPv4 link local: [undef] Wed Apr 04 21:36:00 2012 UDPv4 link remote: [AF_INET]11.22.33.44:1234 Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,WAIT,,, Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,AUTH,,, Wed Apr 04 21:36:00 2012 TLS: Initial packet from [AF_INET]11.22.33.44:1234, sid=ee329574 f15e9e04 Wed Apr 04 21:36:00 2012 VERIFY OK: depth=1, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, CN=Fort-Funston CA, [email protected] Wed Apr 04 21:36:00 2012 VERIFY OK: depth=0, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, CN=server_key, [email protected] Wed Apr 04 21:36:01 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Apr 04 21:36:01 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Apr 04 21:36:01 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Apr 04 21:36:01 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Apr 04 21:36:01 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Apr 04 21:36:01 2012 [server_key] Peer Connection Initiated with [AF_INET]11.22.33.44:1234 Wed Apr 04 21:36:02 2012 MANAGEMENT: >STATE:1333568162,GET_CONFIG,,, Wed Apr 04 21:36:03 2012 SENT CONTROL [server_key]: 'PUSH_REQUEST' (status=1) Wed Apr 04 21:36:03 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,route 172.16.100.1,topology net30,ping 10,ping-restart 120,ifconfig 172.16.100.6 172.16.100.5' Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: timers and/or timeouts modified Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: --ifconfig/up options modified 
Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: route options modified Wed Apr 04 21:36:03 2012 ROUTE_GATEWAY 192.168.1.1/255.255.255.0 I=15 HWADDR=00:1f:1f:3f:61:55 Wed Apr 04 21:36:03 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0 Wed Apr 04 21:36:03 2012 MANAGEMENT: >STATE:1333568163,ASSIGN_IP,,172.16.100.6, Wed Apr 04 21:36:03 2012 open_tun, tt->ipv6=0 Wed Apr 04 21:36:03 2012 TAP-WIN32 device [VPN] opened: \\.\Global\{E28FD52B-F6C3-4094-A36A-30CB02FAC7E8}.tap Wed Apr 04 21:36:03 2012 TAP-Win32 Driver Version 9.9 Wed Apr 04 21:36:03 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 172.16.100.6/255.255.255.252 on interface {E28FD52B-F6C3-4094-A36A-30CB02FAC7E8} [DHCP-serv: 172.16.100.5, lease-time: 31536000] Wed Apr 04 21:36:03 2012 Successful ARP Flush on interface [31] {E28FD52B-F6C3-4094-A36A-30CB02FAC7E8} Wed Apr 04 21:36:08 2012 TEST ROUTES: 2/2 succeeded len=1 ret=1 a=0 u/d=up Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 11.22.33.44 MASK 255.255.255.255 192.168.1.1 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=25 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 0.0.0.0 MASK 128.0.0.0 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 128.0.0.0 MASK 128.0.0.0 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 MANAGEMENT: >STATE:1333568168,ADD_ROUTES,,, Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 172.16.100.1 MASK 255.255.255.255 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 Initialization Sequence Completed Wed Apr 04 21:36:08 2012 MANAGEMENT: >STATE:1333568168,CONNECTED,SUCCESS,172.16.100.6,11.22.33.44 Client's route table after connection with OpenVPN: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.41 281 0.0.0.0 128.0.0.0 172.16.100.1 172.16.100.6 31 94.23.53.45 255.255.255.255 192.168.1.1 192.168.1.41 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 172.16.100.1 172.16.100.6 31 172.16.100.4 255.255.255.252 On-link 172.16.100.6 286 172.16.100.6 255.255.255.255 On-link 172.16.100.6 286 172.16.100.7 255.255.255.255 On-link 172.16.100.6 286 192.168.1.0 255.255.255.0 On-link 192.168.1.41 281 192.168.1.41 255.255.255.255 On-link 192.168.1.41 281 192.168.1.255 255.255.255.255 On-link 192.168.1.41 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.41 281 224.0.0.0 240.0.0.0 On-link 172.16.100.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.41 281 255.255.255.255 255.255.255.255 On-link 172.16.100.6 286 =========================================================================== Persistent Routes: Network Address Netmask 
Gateway Address Metric 0.0.0.0 0.0.0.0 192.168.1.1 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 13 58 ::/0 On-link 1 306 ::1/128 On-link 13 58 2001::/32 On-link 13 306 2001:0:5ef5:79fd:3cc3:6b9:ac7c:14db/128 On-link 15 281 fe80::/64 On-link 31 286 fe80::/64 On-link 13 306 fe80::/64 On-link 13 306 fe80::3cc3:6b9:ac7c:14db/128 On-link 31 286 fe80::7d72:9515:7213:35e3/128 On-link 15 281 fe80::9cec:ce3f:89de:a123/128 On-link 1 306 ff00::/8 On-link 13 306 ff00::/8 On-link 15 281 ff00::/8 On-link 31 286 ff00::/8 On-link =========================================================================== Persistent Routes: None

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your in Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop just as you do today on your laptop when transferring files from your hard drive to a thumb drive or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few use cases examples of when LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfers files to third-parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive. 
    Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it's free, which allows tape to maintain all of its cost advantages over disk! Note 1 - IDC Report, April 2011, "IDC's Archival Storage Solutions Taxonomy, 2011" - Brian Zents

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - thats probably because in writing it I bacame very time-concious after the ASM/ACFS on Solaris extravaganza I ran last year which was almost too long for mortal man to finish in the 1 hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology, and also Compression which is the compliment of Deduplication. The exercises are just introductions - you are referred to the ZFS Adminstration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 Gb disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0 Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use. 
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to topExercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task:Try out different forms of compression available in ZFS Lab:Create 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system. 
With the property enabled duplicate data blocks are removed synchronously. The result is that only unique data is stored and common componenents are shared. Task:See how to implement deduplication and its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib#zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Adminastrion Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. Back to top root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system. 
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while mainaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.

    Read the article

  • 11 ADF Mobile Apps in 30 Hours

    - by Shay Shmeltzer
    The Oracle ADF Mobile team took part in a special "hackathon" this weekend, where 11 teams of new college hires who recently joined Oracle spent 30 hours building enterprise mobile applications leveraging ADF Mobile. One important thing to note - none of the participants had worked with Oracle ADF Mobile before! In fact, 90% of them hadn't developed with ADF previously. All they had was a 2-hour training session before the event - and that's all they needed. From that point on they were able to build great cross-device mobile applications. So what did they build? Here are some examples: a mileage expense tracking system, an ad campaign analysis system, an expense report entry system, a bug tracking system with data analysis, a carpooling social system, a college hiring system with CV scanning, a shipment management system for farmers, a project time entry system, a for-sale post-it system (with item location), and a conference event experience system with a conference map and Twitter feed integration. It was great to see how fast developers were able to learn and leverage ADF Mobile - and how creative the teams were. So how about you? What would you build next? What would be your first ADF Mobile application? Start today!

    Read the article

  • Add presence for a remote user to a legacy telephone system?

    - by niko
    We have a small call center that uses an old Nortel phone system with analog lines. One of our sales people works from home, so her calls do not go through the phone system. This creates a problem at times, as the receptionist does not know whether she is on the phone or not. We can easily get around this by using instant messenger status, but I wanted to ask if there is another way we can do it so that calls can also be forwarded to her when she is not on the phone. I realize that we can do this with a VoIP system, but we're not planning on upgrading to VoIP until next year. Does anyone know if there is an inexpensive way to add this capability today?

    Read the article

  • Why Is Faulty Behaviour In The .NET Framework Not Fixed?

    - by Alois Kraus
    Here is the scenario: You have a Windows Form Application that calls a method via Invoke or BeginInvoke which throws exceptions. Now you want to find out where the error did occur and how the method has been called. Here is the output we do get when we call Begin/EndInvoke or simply Invoke The actual code that was executed was like this:         private void cInvoke_Click(object sender, EventArgs e)         {             InvokingFunction(CallMode.Invoke);         }            [MethodImpl(MethodImplOptions.NoInlining)]         void InvokingFunction(CallMode mode)         {             switch (mode)             {                 case CallMode.Invoke:                     this.Invoke(new MethodInvoker(GenerateError));   The faulting method is called GenerateError which does throw a NotImplementedException exception and wraps it in a NotSupportedException.           [MethodImpl(MethodImplOptions.NoInlining)]         void GenerateError()         {             F1();         }           private void F1()         {             try             {                 F2();             }             catch (Exception ex)             {                 throw new NotSupportedException("Outer Exception", ex);             }         }           private void F2()         {            throw new NotImplementedException("Inner Exception");         } It is clear that the method F2 and F1 did actually throw these exceptions but we do not see them in the call stack. If we directly call the InvokingFunction and catch and print the exception we can find out very easily how we did get into this situation. We see methods F1,F2,GenerateError and InvokingFunction directly in the stack trace and we see that actually two exceptions did occur. Here is for comparison what we get from Invoke/EndInvoke System.NotImplementedException: Inner Exception     StackTrace:    at System.Windows.Forms.Control.MarshaledInvoke(Control caller, Delegate method, Object[] args, Boolean synchronous)     at System.Windows.Forms.Control.Invoke(Delegate method, Object[] args)     at WindowsFormsApplication1.AppForm.InvokingFunction(CallMode mode)     at WindowsFormsApplication1.AppForm.cInvoke_Click(Object sender, EventArgs e)     at System.Windows.Forms.Control.OnClick(EventArgs e)     at System.Windows.Forms.Button.OnClick(EventArgs e) The exception message is kept but the stack starts running from our Invoke call and not from the faulting method F2. We have therefore no clue where this exception did occur! The stack starts running at the method MarshaledInvoke because the exception is rethrown with the throw catchedException which resets the stack trace. That is bad but things are even worse because if previously lets say 5 exceptions did occur .NET will return only the first (innermost) exception. That does mean that we do not only loose the original call stack but all other exceptions and all data contained therein as well. It is a pity that MS does know about this and simply closes this issue as not important. Programmers will play a lot more around with threads than before thanks to TPL, PLINQ that do come with .NET 4. Multithreading is hyped quit a lot in the press and everybody wants to use threads. But if the .NET Framework makes it nearly impossible to track down the easiest UI multithreading issue I have a problem with that. The problem has been reported but obviously not been solved. 
.NET 4 Beta 2 did not have changed that dreaded GetBaseException call in MarshaledInvoke to return only the innermost exception of the complete exception stack. It is really time to fix this. WPF on the other hand does the right thing and wraps the exceptions inside a TargetInvocationException which makes much more sense. But Not everybody uses WPF for its daily work and Windows forms applications will still be used for a long time. Below is the code to repro the issues shown and how the exceptions can be rendered in a meaningful way. The default Exception.ToString implementation generates a hard to interpret stack if several nested exceptions did occur. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Threading; using System.Globalization; using System.Runtime.CompilerServices;   namespace WindowsFormsApplication1 {     public partial class AppForm : Form     {         enum CallMode         {             Direct = 0,             BeginInvoke = 1,             Invoke = 2         };           public AppForm()         {             InitializeComponent();             Thread.CurrentThread.CurrentUICulture = CultureInfo.InvariantCulture;             Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(Application_ThreadException);         }           void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)         {             cOutput.Text = PrintException(e.Exception, 0, null).ToString();         }           private void cDirectUnhandled_Click(object sender, EventArgs e)         {             InvokingFunction(CallMode.Direct);         }           private void cDirectCall_Click(object sender, EventArgs e)         {             try             {                 InvokingFunction(CallMode.Direct);             }             catch (Exception ex)             {                 cOutput.Text = PrintException(ex, 0, null).ToString();             }         }           private void cInvoke_Click(object sender, EventArgs e)         {             InvokingFunction(CallMode.Invoke);         }           private void cBeginInvokeCall_Click(object sender, EventArgs e)         {             InvokingFunction(CallMode.BeginInvoke);         }           [MethodImpl(MethodImplOptions.NoInlining)]         void InvokingFunction(CallMode mode)         {             switch (mode)             {                 case CallMode.Direct:                     GenerateError();                     break;                 case CallMode.Invoke:                     this.Invoke(new MethodInvoker(GenerateError));                     break;                 case CallMode.BeginInvoke:                     IAsyncResult res = this.BeginInvoke(new MethodInvoker(GenerateError));                     this.EndInvoke(res);                     break;             }         }           [MethodImpl(MethodImplOptions.NoInlining)]         void GenerateError()         {             F1();         }           private void F1()         {             try             {                 F2();             }             catch (Exception ex)             {                 throw new NotSupportedException("Outer Exception", ex);             }         }           private void F2()         {            throw new NotImplementedException("Inner Exception");         }           StringBuilder PrintException(Exception ex, int identLevel, StringBuilder sb)         {         
    StringBuilder builtStr = sb;             if( builtStr == null )                 builtStr = new StringBuilder();               if( ex == null )                 return builtStr;                 WriteLine(builtStr, String.Format("{0}: {1}", ex.GetType().FullName, ex.Message), identLevel);             WriteLine(builtStr, String.Format("StackTrace: {0}", ShortenStack(ex.StackTrace)), identLevel + 1);             builtStr.AppendLine();               return PrintException(ex.InnerException, ++identLevel, builtStr);         }               void WriteLine(StringBuilder sb, string msg, int identLevel)         {             foreach (string trimmedLine in SplitToLines(msg)                                            .Select( (line) => line.Trim()) )             {                 for (int i = 0; i < identLevel; i++)                     sb.Append('\t');                 sb.Append(trimmedLine);                 sb.AppendLine();             }         }           string ShortenStack(string stack)         {             int nonAppFrames = 0;             // Skip stack frames not part of our app but include two foreign frames and skip the rest             // If our stack frame is encountered reset counter to 0             return SplitToLines(stack)                               .Where((line) =>                               {                                   nonAppFrames = line.Contains("WindowsFormsApplication1") ? 0 : nonAppFrames + 1;                                   return nonAppFrames < 3;                               })                              .Select((line) => line)                              .Aggregate("", (current, line) => current + line + Environment.NewLine);         }           static char[] NewLines = Environment.NewLine.ToCharArray();         string[] SplitToLines(string str)         {             return str.Split(NewLines, StringSplitOptions.RemoveEmptyEntries);         }     } }
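    One possible workaround for the lost stack trace, sketched below (this snippet is added here for illustration and is not part of the original post; the helper name and the choice of TargetInvocationException as the wrapper are assumptions): wrap the marshaled call so the exception thrown on the UI thread is captured where it occurs and rethrown on the calling side as an inner exception, keeping its stack trace and any nested inner exceptions intact.

    using System;
    using System.Reflection;
    using System.Windows.Forms;

    static class ControlInvokeExtensions
    {
        // Hypothetical helper: runs 'action' on the control's UI thread via Invoke,
        // captures any exception at the point it is thrown, and rethrows it wrapped
        // so the original stack trace and InnerException chain survive the marshaling.
        public static void InvokePreservingException(this Control target, Action action)
        {
            Exception captured = null;
            target.Invoke(new MethodInvoker(() =>
            {
                try { action(); }
                catch (Exception ex) { captured = ex; }   // captured with its full stack intact
            }));
            if (captured != null)
                throw new TargetInvocationException("Exception during Invoke", captured);
        }
    }

    // Usage against the sample form above: this.InvokePreservingException(GenerateError);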

    Read the article

  • How do I free up more space in /boot?

    - by user6722
    My /boot partition is nearly full and I get a warning every time I reboot my system. I already deleted old kernel packages (linux-headers...), actually I did that to install a newer kernel version that came with the automatic updates. After installing that new version, the partition is nearly full again. So what else can I delete? Are there some other files associated to the old kernel images? Here is a list of files that are on my /boot partition: :~$ ls /boot/ abi-2.6.31-21-generic lost+found abi-2.6.32-25-generic memtest86+.bin abi-2.6.38-10-generic memtest86+_multiboot.bin abi-2.6.38-11-generic System.map-2.6.31-21-generic abi-2.6.38-12-generic System.map-2.6.32-25-generic abi-2.6.38-8-generic System.map-2.6.38-10-generic abi-3.0.0-12-generic System.map-2.6.38-11-generic abi-3.0.0-13-generic System.map-2.6.38-12-generic abi-3.0.0-14-generic System.map-2.6.38-8-generic boot System.map-3.0.0-12-generic config-2.6.31-21-generic System.map-3.0.0-13-generic config-2.6.32-25-generic System.map-3.0.0-14-generic config-2.6.38-10-generic vmcoreinfo-2.6.31-21-generic config-2.6.38-11-generic vmcoreinfo-2.6.32-25-generic config-2.6.38-12-generic vmcoreinfo-2.6.38-10-generic config-2.6.38-8-generic vmcoreinfo-2.6.38-11-generic config-3.0.0-12-generic vmcoreinfo-2.6.38-12-generic config-3.0.0-13-generic vmcoreinfo-2.6.38-8-generic config-3.0.0-14-generic vmcoreinfo-3.0.0-12-generic extlinux vmcoreinfo-3.0.0-13-generic grub vmcoreinfo-3.0.0-14-generic initrd.img-2.6.31-21-generic vmlinuz-2.6.31-21-generic initrd.img-2.6.32-25-generic vmlinuz-2.6.32-25-generic initrd.img-2.6.38-10-generic vmlinuz-2.6.38-10-generic initrd.img-2.6.38-11-generic vmlinuz-2.6.38-11-generic initrd.img-2.6.38-12-generic vmlinuz-2.6.38-12-generic initrd.img-2.6.38-8-generic vmlinuz-2.6.38-8-generic initrd.img-3.0.0-12-generic vmlinuz-3.0.0-12-generic initrd.img-3.0.0-13-generic vmlinuz-3.0.0-13-generic initrd.img-3.0.0-14-generic vmlinuz-3.0.0-14-generic Currently, I'm using the 3.0.0-14-generic kernel.

    Read the article

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later. TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky. For now, my API works a bit like this. This is how a shader is created: Shader *shader = Shader::Create( " ... GLSL vertex shader ... ", " ... GLSL pixel shader ... ", " ... HLSL vertex shader ... ", " ... HLSL pixel shader ... "); ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0); ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0); ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1); ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix"); ShaderUniform u2 = shader->GetUniformLocation("Zoom"); There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib. Now this is how I create a vertex declaration and two vertex buffers: VertexDeclaration *decl = VertexDeclaration::Create( VertexStream<vec3,vec2>(VertexUsage::Position, 0, VertexUsage::TexCoord, 0), VertexStream<vec4>(VertexUsage::TexCoord, 1)); VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2))); VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4)); Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics. Once the vertex buffers have been filled with or pointed at data, this is the rendering code: shader->Bind(); shader->SetUniform(u1, GetWorldMatrix()); shader->SetUniform(u2, blah); decl->Bind(); decl->SetStream(vb1, a1, a2); decl->SetStream(vb2, a3); decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3); decl->Unbind(); shader->Unbind(); You can see that decl is a bit more than just a D3D-like vertex declaration; it kinda takes care of rendering as well. Does this make sense at all? What would be a cleaner design? Or a good source of inspiration?

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath. Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution. Reference Data Management: For multi-national companies with a presence in multiple industry categories, market segments, and geographies, the ability to proactively manage changes and harness them to align the front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships, and mappings, represents a key component of the broader agenda for enabling flexibility and agility without sacrificing enterprise-level consistency, regulatory compliance and control. Financial Transformation: Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and underlying processes. Financial transformation often begins with the redesign of one’s chart of accounts. The ability to model and redesign one’s chart of accounts collaboratively, quickly validate it against historical transaction bases, and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications as the in-flight transformation is piloted, can mean the difference between controlled success and project failure. Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle Openworld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle’s Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting. Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at Openworld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers firsthand.

    Read the article

  • How to avoid HDD spin up at system start? (Ubuntu from SSD)

    - by Oliver
    Thanks to hdparm -B1 /dev/sdb, my HDD no longer spins up when powered on at boot. But after the BIOS POST messages complete and Ubuntu starts, the HDD gets a signal over the SATA data cable and spins up. Leaving the data cable disconnected (but with the SATA power cable still plugged in) lets the system boot completely from my SSD without spinning up the HDD. What causes the HDD to spin up? Maybe Grub2? Edit: nope, it doesn't seem to be Grub2 that spins up the drive. I just set up Grub to show its menu without a timeout. Nothing happens until I select the standard Ubuntu boot option; then a few seconds later the drive spins up.

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected via a USB card writer) in an automated way. After the main process, I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know a) why the file system is modified at all and b) why this seems to happen every time I check, and not only in the case of an error (like bad blocks)? Here's the output: linux-box# fsck.ext3 -c /dev/sdx1 e2fsck 1.40.2 (12-Jul-2007) Checking for bad blocks (read-only test): done Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED ***** Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks Thanks, Chris

    Read the article

  • links for 2011-02-08

    - by Bob Rhubart
    When It Comes to Data Integration, Oracle Is the Right Choice (tags: ping.fm) Webcast: Deploy Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications. Feb 15. Event Date: 02/15/2011 9:00am PT / Noon ET. Featured Speakers: Adam Hawley (Oracle Senior Director, Product Management, Virtualization), Ivo Dujmovic (Oracle Director, Technology Integration), Greg Kelly (Oracle Product Strategy Manager - PeopleTools). (tags: oracle virtualization peoplesoft) Webcast: Managing Oracle Exadata with Oracle Enterprise Manager 11g Thursday, February 10, 2011 - 10 a.m. PT/1 p.m. ET. Ask Oracle experts questions and learn firsthand how to efficiently manage all stages of Oracle Exadata’s lifecycle, from testing to deployment. (tags: oracle exalogic enterprisemanager) Arthur Cole: Winning the Consolidated Data Center Future | ITBusinessEdge.com "According to InformationWeek, the amount of data under management is increasing by about 20 percent per year, with some organizations having to deal with 50 percent or more. That means capacity needs to double every two or three years." - Arthur Cole (tags: dataconsolidation enterprisearchitecture) Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products (Telecommunications Architecture Corner) Raul Goycoolea's post examines "how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry." (tags: oracle otn enterprisearchitecture) Richard Veryard on Architecture: What is an EA vendor? "Even some people who insist that enterprise architecture shouldn't be thought of as merely software architecture seem to think that 'tools' only means 'software tools.'" - Richard Veryard (tags: enterprisearchitecture) MDM for Tax Authorities (Oracle Master Data Management) "Tax Authorities face a multitude of IT challenges," says David Butler. "Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex." (tags: oracle otn MDM ITarchitecture) Bernard Golden: How Cloud Computing Changes IT Staffs | CIO.com "Enterprise architects become more important" tops Bernard's list of changes. (tags: cloudcomputing staffing cio enterprisearchitecture) Martijn Linssen: Social Enterprise Magic Quadrant "Revolutions usually go wrong, where evolutions usually go right." - Martijn Linssen (tags: socialcomputing enterprise2.0) Why Do IT Roles Fail? | CIO "The roles that come up most often are the ones that are not directly building or maintaining systems. These include architecture, planning, vendor management, relationship management, PMO, and security." - Marc Cecere (tags: softwarearchitecture technologyroles) We're Hiring! - Server and Desktop Virtualization Product Management (Oracle's Virtualization Blog) Adam Hawley with information on an opportunity for qualified job seekers. (tags: oracle otn employment virtualization)

    Read the article

  • How can I move a load of zone records from a web-based system to a text-based one?

    - by Chris Adams
    Hi there, I have a few domains with Dreamhost where I have set up a load of records using their web-based domain name system, and I'd like to move them to another provider that lets me enter the info directly as a text file for their name server, BIND 9, to use. (If you're interested, I'm moving them to Gandi.net.) Previously, when I used a cPanel-based system to do something similar, there was a tool that let me simply enter a domain name, and any available domains were automagically entered into the system, saving me from typing them myself (and bringing down sites with silly typos in the process). What open source tool can I use to query a domain for all the relevant subdomains and records and list them in a format like a zone file that I can use with other name servers?

    Read the article

  • WPF: Reloading app parts to handle persistence as well as memory management.

    - by Ingó Vals
    I created an app using Microsoft's WPF. It mostly handles data reading and input, as well as associating relations between data within specific parameters. As a total beginner I made some bad design decisions (not so much decisions as using the first thing I got to work), but now that I understand WPF better I'm getting the urge to refactor my code with better design principles. I had several problems, but I guess each deserves its own question for clarity. Here I'm asking for proper ways to handle the data itself. In the original I wrapped each row in an object when it was fetched from the database (using LINQ to SQL), somewhat like Active Record, just not active or persistent (each app instance had its own data handling part). The app has subunits handling different aspects. However, as it was set up, it loaded everything when started. This creates several problems; for example, often it wouldn't be necessary to load a part unless we were specifically going to work with that part, so I want some form of lazy loading. Also, there was a problem with inner persistence, because you might create a new object/row in one aspect and perhaps set a relation between it and a different object, but the new object wouldn't appear until the program was restarted. Persistence between instances of the app won't be a huge problem because of the small number of people using the program. While I could solve this now using dirty tricks, I would rather refactor the program and do it elegantly. Now the question is how. I know there are several ways and a few come to mind: 1) Each aspect of the program is its own UserControl that gets reloaded/instanced every time you navigate to it. This ensures you only load the data you need and you get some persistency. The DB server is located on the same LAN and the tables are small, so that shouldn't be a big problem. A minor drawback is that you would have to remember the state of each aspect so you wouldn't always start back at square one. 2) Having a ViewModel-type object at the base level of the app with lazy loading and some kind of timeout. I would then propagate this object down the visual tree to ensure every aspect is getting its data from the same instance. 3) A semi-Active-Record data layer with static load methods. 4) Some other idea. What, in your opinion, is the most practical way to do this in WPF, and what does MVVM assume?
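    For what it's worth, a minimal sketch of how option 2 could look using .NET's Lazy<T> (available since .NET 4) follows; all type names (Project, ProjectRepository, ProjectsViewModel) are placeholders rather than anything from the original app, and the repository body is a stand-in for the real LINQ to SQL queries.

    using System;
    using System.Collections.Generic;

    public class Project
    {
        public string Name { get; set; }
    }

    public class ProjectRepository
    {
        // Stand-in for the real data access; the actual app would query
        // the database through its LINQ to SQL context here.
        public IList<Project> LoadAll()
        {
            return new List<Project> { new Project { Name = "Sample" } };
        }
    }

    public class ProjectsViewModel
    {
        private readonly ProjectRepository _repository;
        private Lazy<IList<Project>> _projects;

        public ProjectsViewModel(ProjectRepository repository)
        {
            _repository = repository;
            _projects = new Lazy<IList<Project>>(_repository.LoadAll);
        }

        // Nothing touches the database until a view actually binds to this property.
        public IList<Project> Projects
        {
            get { return _projects.Value; }
        }

        // Call this after another aspect creates or changes rows, so the next
        // access reloads fresh data instead of serving a stale cached list.
        public void Invalidate()
        {
            _projects = new Lazy<IList<Project>>(_repository.LoadAll);
        }
    }

    public class Demo
    {
        public static void Main()
        {
            var vm = new ProjectsViewModel(new ProjectRepository());
            Console.WriteLine(vm.Projects.Count);   // first access triggers the load
            vm.Invalidate();                        // the next access will reload
            Console.WriteLine(vm.Projects[0].Name);
        }
    }

    In a full MVVM setup the view model would also implement INotifyPropertyChanged and raise a change notification for Projects from Invalidate(), so bound views refresh without restarting the program; that addresses the "new object doesn't appear until restart" issue described above.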

    Read the article

< Previous Page | 340 341 342 343 344 345 346 347 348 349 350 351  | Next Page >