Search Results

Search found 15209 results on 609 pages for 'configuration'.


  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed URL, IIS would just throw up a 404 error. This is an IIS error, not an ASP.NET error, so this doesn't actually come from ASP.NET's routing engine but from IIS's handling of extensionless URLs. Note that it's clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to parse the extensionless URL itself and failing, rather than firing it into ASP.NET. As I mentioned, on my local machine this all worked fine, and to make sure local and live setups matched I re-copied my Web.config, double checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However no workey on the server with IIS 7.0!!! Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true:
       <system.webServer>
         <modules runAllManagedModulesForAllRequests="true">
           <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
         </modules>
       </system.webServer>
    And lo and behold, Routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing about this is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However, on IIS 7.0 on my live server the same missing setting was not working. On IIS 7.0 this key must be present or Routing will not work. Oddly, on IIS 7.5 it appears that you can't even turn off the behavior – setting runAllManagedModulesForAllRequests="false" had no effect at all, and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected. Kind of disappointing too that Windows Server 2008 (R1) can't be upgraded to IIS 7.5. It sure seems like that should have been possible, since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like with the release of IIS Express Microsoft has taken some steps to untie some of those tight OS links from IIS. Let's hope that's the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out. Moral of the story – never assume that your dev setup will work as-is on the live setup. It took me forever to figure this out because I assumed that since my web.config on the local machine was fine and working, and I copied all relevant web.config data to the server, it couldn't be the configuration settings. I was looking everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense settings on the modules key. Never assume anything. The other moral is: try to keep your dev machine and server OSs in sync whenever possible. Maybe it's time to upgrade to Windows Server 2008 R2 after all. More info on extensionless URLs in IIS: want to find out more about exactly how extensionless URLs work on IIS 7?
    Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic! © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows.
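    As an aside, a commonly cited alternative to running all managed modules for every request is to re-register just the routing module with its managedHandler precondition removed. The sketch below is an assumption to verify against your own server – the module name must match the registration in your applicationHost.config (UrlRoutingModule-4.0 is the usual name when .NET 4 is installed):
       <system.webServer>
         <modules>
           <!-- Re-add the routing module without the managedHandler precondition -->
           <remove name="UrlRoutingModule-4.0" />
           <add name="UrlRoutingModule-4.0" type="System.Web.Routing.UrlRoutingModule" preCondition="" />
         </modules>
       </system.webServer>
    This keeps the rest of the pipeline native for static files while still letting extensionless URLs reach ASP.NET routing.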

  • Why is my machine unable to mount my SMB drives ("CIFS VFS: Error connecting to socket. Aborting operation", return code -115)?

    - by downbeat
    I have a machine running Precise (12.04 x64), and I cannot mount my SMB drives (I have 3; we'll call them public, private and download). It used to work (a week or two ago) and I didn't touch fstab! The machine hosting the shares is a commercial NAS, and I'm not seeing anything that would indicate it's an issue with the NAS. I have an older machine which I updated to Precise at the same time (both fresh installed, not dist-upgrade), so it should have a very similar configuration. It is not having any problems. I am not having problems on Windows machines/partitions either, only on one of my Precise machines. The two machines are using identical entries in fstab and identical /etc/samba/smb.conf files. I don't think I've ever changed smb.conf (it has never mattered before). My fstab entries all basically look like this:
       //10.1.1.111/public /media/public cifs credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat,file_mode=0644,dir_mode=0755 0 0
    Here's the dmesg output on boot:
       [ 51.162198] CIFS VFS: Error connecting to socket. Aborting operation
       [ 51.162369] CIFS VFS: cifs_mount failed w/return code = -115
       [ 51.194106] CIFS VFS: Error connecting to socket. Aborting operation
       [ 51.194250] CIFS VFS: cifs_mount failed w/return code = -115
       [ 51.198120] CIFS VFS: Error connecting to socket. Aborting operation
       [ 51.198243] CIFS VFS: cifs_mount failed w/return code = -115
    There are no other errors I see in the dmesg output. Originally, when I ran 'testparm -s', the output contained these lines:
       ERROR: lock directory /var/run/samba does not exist
       ERROR: pid directory /var/run/samba does not exist
    Here are the Samba-related programs I have installed:
       $ dpkg --list|grep -i samba
       ii  libpam-winbind    2:3.6.3-2ubuntu2.3  Samba nameservice and authentication integration plugins
       ii  libwbclient0      2:3.6.3-2ubuntu2.3  Samba winbind client library
       ii  nautilus-share    0.7.3-1ubuntu2      Nautilus extension to share folder using Samba
       ii  python-smbc       1.0.13-0ubuntu1     Python bindings for Samba clients (libsmbclient)
       ii  samba-common      2:3.6.3-2ubuntu2.3  common files used by both the Samba server and client
       ii  samba-common-bin  2:3.6.3-2ubuntu2.3  common files used by both the Samba server and client
       ii  winbind           2:3.6.3-2ubuntu2.3  Samba nameservice integration server
       $ dpkg --list|grep -i smb
       ii  dmidecode         2.11-4              SMBIOS/DMI table decoder
       ii  libsmbclient      2:3.6.3-2ubuntu2.3  shared library for communication with SMB/CIFS servers
       ii  python-smbc       1.0.13-0ubuntu1     Python bindings for Samba clients (libsmbclient)
       ii  smbclient         2:3.6.3-2ubuntu2.3  command-line SMB/CIFS clients for Unix
       ii  smbfs             2:5.1-1ubuntu1      Common Internet File System utilities - compatibility package
       $ dpkg --list|grep -i cifs
       ii  cifs-utils        2:5.1-1ubuntu1      Common Internet File System utilities
       ii  libsmbclient      2:3.6.3-2ubuntu2.3  shared library for communication with SMB/CIFS servers
       ii  smbclient         2:3.6.3-2ubuntu2.3  command-line SMB/CIFS clients for Unix
    I originally noticed that my other machine had "libpam-winbind" and "nautilus-share" installed and the machine with the issue did not. Installing those two packages solved my errors with 'testparm -s', but did not fix my issue. Finally, I tried to purge and reinstall these packages: smbclient smbfs cifs-utils samba-common samba-common-bin. Still no luck. Again, it used to work; now it doesn't. A very similarly configured machine works (though some packages are out of date on the working machine).
    The NAS has only one interface/IP address; nmblookup works to find its IP from its hostname (from the machine with the issue), and it responds to a ping. Please, any help would be great. I've been searching on AskUbuntu, SuperUser, ubuntuforums and plain old search engines for a week now and it's driving me crazy!
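    One note on the error code: in the kernel sources, 115 is EINPROGRESS, so "cifs_mount failed w/return code = -115" generally means the TCP connection to the server never completed – which points at the network layer (routing, firewall, interface) rather than at Samba configuration. A quick way to test raw reachability of the SMB ports from the failing machine, assuming the NAS is still at 10.1.1.111, would be:
       # Can we reach the SMB/CIFS ports at all?
       nc -zv 10.1.1.111 445
       nc -zv 10.1.1.111 139
       # Does the NAS answer at the SMB protocol level?
       smbclient -L //10.1.1.111 -N
    If these fail only on the broken machine, the problem is below the CIFS mount (interface configuration, firewall rules), not in fstab or smb.conf.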

  • Banshee encountered a Fatal Error (sqlite error 11: database disk image is malformed)

    - by Nik
    I am running Ubuntu 10.10 Maverick Meerkat, and recently I have been helping to test out indicator-weather using the unstable builds. However, there was a bug which caused my system to freeze suddenly (due to indicator-weather, not Ubuntu), and the only way to recover is to do a hard reset of the system. This happened a couple of times. And when I tried to open Banshee after a couple of such resets, I got the following fatal error which forces me to quit Banshee. The screenshot is not clear enough to read the error, so I am posting it below:
       An unhandled exception was thrown: Sqlite error 11: database disk image is malformed (SQL: BEGIN TRANSACTION; DELETE FROM CoreSmartPlaylistEntries WHERE SmartPlaylistID IN (SELECT SmartPlaylistID FROM CoreSmartPlaylists WHERE IsTemporary = 1); DELETE FROM CoreSmartPlaylists WHERE IsTemporary = 1; COMMIT TRANSACTION)
       at Hyena.Data.Sqlite.Connection.CheckError (Int32 errorCode, System.String sql) [0x00000] in <filename unknown>:0
       at Hyena.Data.Sqlite.Connection.Execute (System.String sql) [0x00000] in <filename unknown>:0
       at Hyena.Data.Sqlite.HyenaSqliteCommand.Execute (Hyena.Data.Sqlite.HyenaSqliteConnection hconnection, Hyena.Data.Sqlite.Connection connection) [0x00000] in <filename unknown>:0
       Exception has been thrown by the target of an invocation.
       at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
       at System.Reflection.MonoCMethod.Invoke (BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
       at System.Reflection.ConstructorInfo.Invoke (System.Object[] parameters) [0x00000] in <filename unknown>:0
       at System.Activator.CreateInstance (System.Type type, Boolean nonPublic) [0x00000] in <filename unknown>:0
       at System.Activator.CreateInstance (System.Type type) [0x00000] in <filename unknown>:0
       at Banshee.Gui.GtkBaseClient.Startup () [0x00000] in <filename unknown>:0
       at Hyena.Gui.CleanRoomStartup.Startup (Hyena.Gui.StartupInvocationHandler startup) [0x00000] in <filename unknown>:0
       .NET Version: 2.0.50727.1433
       OS Version: Unix 2.6.35.27
       Assembly Version Information: gkeyfile-sharp (1.0.0.0) Banshee.AudioCd (1.9.0.0) Banshee.MiniMode (1.9.0.0) Banshee.CoverArt (1.9.0.0) indicate-sharp (0.4.1.0) notify-sharp (0.4.0.0) Banshee.SoundMenu (1.9.0.0) Banshee.Mpris (1.9.0.0) Migo (1.9.0.0) Banshee.Podcasting (1.9.0.0) Banshee.Dap (1.9.0.0) Banshee.LibraryWatcher (1.9.0.0) Banshee.MultimediaKeys (1.9.0.0) Banshee.Bpm (1.9.0.0) Banshee.YouTube (1.9.0.0) Banshee.WebBrowser (1.9.0.0) Banshee.Wikipedia (1.9.0.0) pango-sharp (2.12.0.0) Banshee.Fixup (1.9.0.0) Banshee.Widgets (1.9.0.0) gio-sharp (2.14.0.0) gudev-sharp (1.0.0.0) Banshee.Gio (1.9.0.0) Banshee.GStreamer (1.9.0.0) System.Configuration (2.0.0.0) NDesk.DBus.GLib (1.0.0.0) gconf-sharp (2.24.0.0) Banshee.Gnome (1.9.0.0) Banshee.NowPlaying (1.9.0.0) Mono.Cairo (2.0.0.0) System.Xml (2.0.0.0) Banshee.Core (1.9.0.0) Hyena.Data.Sqlite (1.9.0.0) System.Core (3.5.0.0) gdk-sharp (2.12.0.0) Mono.Addins (0.4.0.0) atk-sharp (2.12.0.0) Hyena.Gui (1.9.0.0) gtk-sharp (2.12.0.0) Banshee.ThickClient (1.9.0.0) Nereid (1.9.0.0) NDesk.DBus.Proxies (0.0.0.0) Mono.Posix (2.0.0.0) NDesk.DBus (1.0.0.0) glib-sharp (2.12.0.0) Hyena (1.9.0.0) System (2.0.0.0) Banshee.Services (1.9.0.0) Banshee (1.9.0.0) mscorlib (2.0.0.0)
       Platform Information: Linux 2.6.35-27-generic i686 unknown GNU/Linux
       Distribution Information: [/etc/lsb-release] DISTRIB_ID=Ubuntu DISTRIB_RELEASE=10.10 DISTRIB_CODENAME=maverick DISTRIB_DESCRIPTION="Ubuntu 10.10" [/etc/debian_version] squeeze/sid
    Just to make it clear, this happened only after the hard resets and not before. I used to use Banshee every day and it worked perfectly. Can anyone help me fix this?
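    "Database disk image is malformed" means the SQLite file itself was damaged by the hard resets. One possible recovery path, sketched below, is to dump whatever SQLite can still read and rebuild the database from it. This assumes Banshee 1.x keeps its library at ~/.config/banshee-1/banshee.db – check the path on your system, and back the file up first:
       cd ~/.config/banshee-1
       cp banshee.db banshee.db.bak                          # keep a copy of the damaged file
       sqlite3 banshee.db "PRAGMA integrity_check;"          # confirm the corruption
       sqlite3 banshee.db ".dump" | sqlite3 banshee-new.db   # rebuild from whatever is readable
       mv banshee-new.db banshee.db
    If the dump aborts partway through, the fallback is deleting banshee.db and letting Banshee re-import the music library on its next start.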

  • SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

    - by pinaldave
    Glenn Berry works as a Database Architect at NewsGator Technologies in Denver, CO. He is a SQL Server MVP, and has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS. He is also an Adjunct Faculty member at University College – University of Denver, where he has been teaching since 2000. He is a wonderful blogger and often blogs here. I am a big fan of Glenn's Dynamic Management View (DMV) scripts. His scripts are extremely popular, and the reality is that he inspired me to start this series with his famous DMV, which I mentioned in the very first wait stats blog post (I had forgotten to request his permission to re-use the script, but when asked later he wholeheartedly approved it). Here is his excellent blog post on the subject of wait stats: Analyzing cumulative wait stats in SQL Server 2005 and above has become a popular and effective technique for diagnosing performance issues and further focusing your troubleshooting and diagnostic efforts. Rather than just guessing about what resource(s) SQL Server is waiting on, you can actually find out by running a relatively simple DMV query. Once you know what resources SQL Server is spending the most time waiting on, you can run more specific queries that focus on that resource to get a better idea what is causing the problem. I do want to throw out a few caveats about using wait stats as a diagnostic tool. First, they are most useful when your SQL Server instance is experiencing performance problems. If your instance is running well, with no indication of any resource pressure from other sources, then you should not worry that much about what the top wait types are. SQL Server will always be waiting on some resource, but many wait types are quite benign and can be safely ignored. In spite of this, I quite often see experienced DBAs obsessing over the top wait type, even when their SQL Server instance is running extremely well. Second, I often see DBAs jump to the wrong conclusion based on seeing a particular well-known wait type. A good example is CXPACKET waits. People typically jump to the conclusion that high CXPACKET waits mean they should immediately change their instance-level MAXDOP setting to 1. This is not always the best solution. You need to consider your workload type, and look carefully for any important "missing" indexes that might be causing the query optimizer to use a parallel plan to compensate for the missing index. In this case, correcting the index problem is usually a better solution than changing MAXDOP, since you are curing the disease rather than just treating the symptom. Finally, you should get in the habit of clearing out your cumulative wait stats with the DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); command. This is especially important if you have made a configuration or index change, or if your workload has changed recently. Otherwise, your cumulative wait stats will be polluted with old stats from weeks or months ago (since the last time SQL Server was started or the stats were cleared). If you make a change to your SQL Server instance, or add an index, you should clear out your wait stats, and then wait a while to see what your new top wait stats are. At any rate, enjoy Pinal Dave's series on Wait Stats. This blog post has been written by Glenn Berry (Twitter | Blog). Read all the posts in the Wait Types and Queue series.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
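    For readers who want to try the "relatively simple DMV query" Glenn mentions, a minimal sketch (not his full published script, which filters a much longer list of benign waits) looks something like this:
       -- Top cumulative waits since the last restart or DBCC SQLPERF clear
       SELECT TOP (10)
              wait_type,
              wait_time_ms,
              signal_wait_time_ms,
              waiting_tasks_count
       FROM sys.dm_os_wait_stats
       WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                               N'BROKER_TASK_STOP', N'REQUEST_FOR_DEADLOCK_SEARCH')
       ORDER BY wait_time_ms DESC;
    Treat the four excluded wait types above as placeholders rather than a complete filter; Glenn's version excludes many more.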

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
    The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO. To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications. Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The extensive reporting capabilities allow you to get quick status of a file transfer and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy. Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use case patterns:
       - Standalone: transferring files on its own, using embedded FTP and sFTP servers and the file systems to which it has access.
       - SOA integration: a SOA application can be the source or target of a transfer. A SOA application can also be the common endpoint for the target of one transfer and the source of another.
       - B2B integration: a B2B application can be the source or target of a transfer. A B2B application can also be the common endpoint for the target of one transfer and the source of another.
       - Healthcare integration: a Healthcare application can be the source or target of a transfer. A Healthcare application can also be the common endpoint for the target of one transfer and the source of another.
       - Oracle Service Bus (OSB) integration: Oracle MFT can integrate with Oracle Service Bus web service interfaces. An OSB interface can be the source or target of a transfer. An Oracle Service Bus interface can also be the common endpoint for the target of one transfer and the source of another.
       - Hybrid integration: Oracle MFT can be one participant in a web of data transfers that includes multiple application types.
    Oracle Managed File Transfer has four user roles: file handlers, designers, monitors, and administrators.
    File handlers:
       - Copy files to file transfer staging areas, which are called sources.
       - Retrieve files from file transfer destinations, which are called targets.
    Designers:
       - Create, read, update and delete file transfer sources.
       - Create, read, update and delete file transfer targets.
       - Create, read, update and delete transfers, which link sources and targets in complete file delivery flows.
       - Deploy and test transfers.
    Monitors:
       - Use the Dashboard and reports to ensure that transfer instances are successful.
       - Pause and resume lengthy transfers.
       - Troubleshoot errors and resubmit transfers.
       - View artifact deployment details and history.
       - View artifact dependence relationships.
       - Enable and disable sources, targets, and transfers.
       - Undeploy sources, targets, and transfers.
       - Start and stop embedded FTP and sFTP servers.
    Administrators:
       - All file handler tasks
       - All designer tasks
       - All monitor tasks
       - Add other users and determine their roles
       - Configure user directory permissions
       - Configure the Oracle Managed File Transfer server
       - Configure embedded FTP and sFTP servers, including security
       - Configure B2B and Healthcare domains
       - Back up and restore the Oracle Managed File Transfer configuration
       - Purge transferred files and instance data
       - Archive and restore instance data and payloads
       - Import and export metadata
    You will find all the related information about SOA 12.1.3 Oracle Managed File Transfer (MFT) in the documentation: Using Oracle Managed File Transfer. Resources and links:
       - Oracle Unveils Oracle SOA Suite 12c
       - Oracle Managed File Transfer
       - Oracle Managed File Transfer SOA 12c White Paper
    For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

  • Adventures in MVVM – ViewModel Location and Creation

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM. In this post, I am going to explore how I prefer to attach ViewModels to my Views. I have published the code to my ViewModelSupport project on CodePlex in case you'd like to see how it works along with some examples.
    Some history: My approach to View-First ViewModel creation has evolved over time. I have constructed ViewModels in code-behind. I have instantiated ViewModels in the resources section of the view. I have used Prism to resolve ViewModels via Dependency Injection. I have created attached properties that use Dependency Injection containers underneath. With all of these approaches, I continue to find issues in either composability, blendability or maintainability. Laurent Bugnion came up with a pretty good approach in MVVM Light Toolkit with his ViewModelLocator, but as John Papa points out, it has maintenance issues. John paired up with Glenn Block to make the ViewModelLocator more generic by using MEF to compose ViewModels. It is a great approach, but I don't like baking specific resolution technologies into the ViewModelSupport project. I bring these people up, not to name drop, but to give them credit for the place I finally landed in my journey to resolve ViewModels. I have come up with my own version of the ViewModelLocator that is both generic and container agnostic. The solution is blendable, configurable and simple to use. Use any resolution mechanism you want: MEF, Unity, Ninject, Activator.Create, lookup tables, new, whatever.
    How to use the locator:
    1. Create a class to contain your resolution configuration:
       public class YourViewModelResolver : IViewModelResolver
       {
           private YourFavoriteContainer container = new YourFavoriteContainer();
           public YourViewModelResolver()
           {
               // Configure your container
           }
           public object Resolve(string viewModelName)
           {
               return container.Resolve(viewModelName);
           }
       }
    Examples of doing this are on CodePlex for MEF, Unity and Activator.CreateInstance.
    2. Create your ViewModelLocator with your custom resolver in App.xaml:
       <VMS:ViewModelLocator x:Key="ViewModelLocator">
           <VMS:ViewModelLocator.Resolver>
               <local:YourViewModelResolver />
           </VMS:ViewModelLocator.Resolver>
       </VMS:ViewModelLocator>
    3. Hook up your data context whenever you want a ViewModel (WPF):
       <Border DataContext="{Binding YourViewModelName, Source={StaticResource ViewModelLocator}}">
    This example uses dynamic properties on the ViewModelLocator and passes the name to your resolver to figure out how to compose it.
    4. What about Silverlight? Good question. You can't bind to dynamic properties in Silverlight 4 (crossing my fingers for Silverlight 5), but you CAN use string indexing:
       <Border DataContext="{Binding [YourViewModelName], Source={StaticResource ViewModelLocator}}">
    But, as John Papa points out in his article, there is a silly bug in Silverlight 4 (as of this writing) that will call into the indexer six times when it binds. While this is little more than a nuisance when getting most properties, it can be much more of an issue when you are resolving ViewModels six times. If this gets in your way, the solution (as pointed out by John) is to use an IndexConverter (instantiated in App.xaml and also included in the project):
       <Border DataContext="{Binding Source={StaticResource ViewModelLocator}, Converter={StaticResource IndexConverter}, ConverterParameter=YourViewModelName}">
    It is a bit uglier than the WPF version (this method will also work in WPF if you prefer), but it is still not all that bad.
    Conclusion: This approach works really well (I suppose I am a bit biased). It allows for composability from any mechanism you choose. It is blendable (consider serving up different objects in Design Mode if you wish... or different constructors… whatever makes sense to you). It works in Cider. It is configurable. It is flexible. It is the best way I have found to manage View-First ViewModel hookups. Thanks to the guys mentioned in this article for getting me to something I love using. Enjoy.
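    For readers curious how "dynamic properties" on a locator can work at all, a minimal sketch is below. It assumes a .NET 4 DynamicObject-based locator; the actual ViewModelSupport class on CodePlex may differ in its details:
       using System.Dynamic;

       // Hypothetical simplified locator: WPF binds to a dynamic property and
       // Silverlight to a string indexer; both paths forward the name to the resolver.
       public class ViewModelLocator : DynamicObject
       {
           public IViewModelResolver Resolver { get; set; }

           // Supports {Binding [YourViewModelName], ...} in Silverlight
           public object this[string viewModelName]
           {
               get { return Resolver.Resolve(viewModelName); }
           }

           // Supports {Binding YourViewModelName, ...} in WPF
           public override bool TryGetMember(GetMemberBinder binder, out object result)
           {
               result = Resolver.Resolve(binder.Name);
               return true;
           }
       }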

  • kubuntu muon package manager stopped working

    - by aseed
    I have Kubuntu. Today, after updating, the muon package manager got stuck at 64%, so I closed it. After that, when I try to update, reinstall or install software, the manager gets stuck. So how can I reinstall the muon package manager from the terminal? I tried sudo apt-get install muon and I get this message:
       Reading package lists... Done
       Building dependency tree
       Reading state information... Done
       muon is already the newest version.
       You might want to run 'apt-get -f install' to correct these:
       The following packages have unmet dependencies:
        libopencv-dev : Depends: libopencv-core-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-ml-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-imgproc-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-video-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-objdetect-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-gpu-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-highgui-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-calib3d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-flann-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-features2d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-legacy-dev (= 2.3.1-4ppa1) but it is not going to be installed
                        Depends: libopencv-contrib-dev (= 2.3.1-4ppa1) but it is not going to be installed
       E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    So what to do? I need to reinstall it because it is not working.
       ~$ sudo dpkg --configure -a
       dpkg: dependency problems prevent configuration of libopencv-dev:
        libopencv-dev depends on libopencv-core-dev (= 2.3.1-4ppa1); however:
         Package libopencv-core-dev is not installed.
        libopencv-dev depends on libopencv-ml-dev (= 2.3.1-4ppa1); however:
         Package libopencv-ml-dev is not installed.
        libopencv-dev depends on libopencv-imgproc-dev (= 2.3.1-4ppa1); however:
         Package libopencv-imgproc-dev is not installed.
        libopencv-dev depends on libopencv-video-dev (= 2.3.1-4ppa1); however:
         Package libopencv-video-dev is not installed.
        libopencv-dev depends on libopencv-objdetect-dev (= 2.3.1-4ppa1); however:
         Package libopencv-objdetect-dev is not installed.
        libopencv-dev depends on libopencv-gpu-dev (= 2.3.1-4ppa1); however:
         Package libopencv-gpu-dev is not installed.
        libopencv-dev depends on libopencv-highgui-dev (= 2.3.1-4ppa1); however:
         Package libopencv-highgui-dev is not installed.
        libopencv-dev depends on libopencv-calib3d-dev (= 2.3.1-4ppa1); however:
         Package libopencv-calib3d-dev is not installed.
        libopencv-dev depends on libopencv-flann-dev (= 2.3.1-4ppa1); however:
         Package libopencv-flann-dev is not installed.
        libopencv-dev depends on libopencv-features2d-dev (= 2.3.1-4ppa1); however:
         Package libopencv-features2d-dev is not installed.
        libopencv-dev depends on libopencv-legacy-dev (= 2.3.1-4ppa1); however:
         Package libopencv-legacy-dev is not installed.
        libopencv-dev depends on libopencv-contrib-dev (= 2.3.1-4ppa1); however:
         Package libopencv-contrib-dev is not installed.
       dpkg: error processing libopencv-dev (--configure):
        dependency problems - leaving unconfigured
       Errors were encountered while processing:
        libopencv-dev
    I also tried sudo apt-get install -f and sudo dpkg --configure -a, and still the same problem... I think I am getting this problem because of updating Kubuntu today.

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type. From Books On-Line: "Occurs when a task is waiting for I/Os to finish." ASYNC_IO_COMPLETION explanation: tasks are waiting for I/O to finish. If by any means your application that's connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type. Reducing ASYNC_IO_COMPLETION waits: when it is an issue related to IO, one should check the following things associated with the IO subsystem:
       - Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.). Note that it should be re-written to avoid this wait type.
       - Proper placement of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
       - Check the File Statistics and see if there is a higher IO Read and IO Write Stall: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
       - Check the event log and error log for any errors or warnings related to IO.
       - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept that. After some investigation, he agreed to change the HBA Queue Depth on the development setup (test environment). As soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden big improvement in performance.
       - It is very likely that there are no proper indexes on the system and yet there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate cover index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (considering the cover index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.
    Checking memory-related Perfmon counters:
       - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
       - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
       - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
       - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
       - Memory: Available Mbytes (information only)
       - Memory: Page Faults/sec (benchmark only)
       - Memory: Pages/sec (benchmark only)
    Checking disk-related Perfmon counters:
       - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
       - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
       - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Read all the posts in the Wait Types and Queue series.
Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
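    As a companion to the file statistics advice above, a minimal sketch of the modern form of that query (sys.dm_io_virtual_file_stats supersedes fn_virtualfilestats from SQL Server 2005 onward) might look like this:
       -- Files with the highest cumulative IO stalls since the last restart
       SELECT DB_NAME(vfs.database_id) AS database_name,
              vfs.file_id,
              vfs.num_of_reads,
              vfs.num_of_writes,
              vfs.io_stall_read_ms,
              vfs.io_stall_write_ms
       FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
       ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC;
    High stall numbers concentrated on one drive are a good hint that file placement or the underlying storage path for that drive deserves attention.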

  • Useful SVN and Git commands – Cheatsheet

    - by Madhan ayyasamy
    The following snippets will be helpful to anyone who uses version control systems like Git and SVN.
       svn checkout/co checkout-url – used to pull an SVN tree from the server.
       svn update/up – used to update the local copy with the changes made in the repository.
       svn commit/ci -m "message" filename – used to commit the changes in a file to the repository with a message.
       svn diff filename – shows the differences between your current file and what's in the repository now.
       svn revert filename – overwrites the local file with the one in the repository.
       svn add filename – adds a file to the repository; you should commit your changes, and only then will it be reflected in the repository.
       svn delete filename – deletes a file from the repository; you should commit your changes, and only then will it be reflected in the repository.
       svn move source destination – moves a file from one directory to another or renames a file. It affects your local copy immediately, and the repository after committing.
       git config – sets configuration values for your user name, email, file formats and more.
       git init – initializes a git repository – creates the initial '.git' directory in a new or existing project.
       git clone – makes a Git repository copy from a remote source. Also adds the original location as a remote so you can fetch from it again and push to it if you have permissions.
       git add – adds file changes in your working directory to your index.
       git rm – removes files from your index and your working directory so they will not be tracked.
       git commit – takes all of the changes written in the index, creates a new commit object pointing to it and sets the branch to point to that new commit.
       git status – shows you the status of files in the index versus the working directory.
       git branch – lists existing branches, including remote branches if '-a' is provided. Creates a new branch if a branch name is provided.
       git checkout – checks out a different branch – switches branches by updating the index, working tree, and HEAD to reflect the chosen branch.
       git merge – merges one or more branches into your current branch and automatically creates a new commit if there are no conflicts.
       git reset – resets your index and working directory to the state of your last commit.
       git tag – tags a specific commit with a simple, human readable handle that never moves.
       git pull – fetches the files from the remote repository and merges them with your local one.
       git push – pushes all the modified local objects to the remote repository and advances its branches.
       git remote – shows all the remote versions of your repository.
       git log – shows a listing of commits on a branch including the corresponding details.
       git show – shows information about a git object.
       git diff – generates patch files or statistics of differences between paths or files in your git repository, or your index or your working directory.
       gitk – graphical Tcl/Tk based interface to a local Git repository.

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things that is keeping us busy is the Web Camps we are delivering across 5 cities. If you are a reader of this blog and also attended one of these web camps, there is a good chance that you have seen me, since I was there in all the places so far. The topics that we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5. Whenever I talk about SQL CE, the immediate response is that people are wowed that Microsoft has shipped a FREE compact edition database – an embedded database that can be x-copy deployed. You may think, well, didn't Microsoft ship SQL Express, which is also FREE? The difference is that SQL Express runs as a service on the machine (if you open SQL Configuration Manager, you can see that SQL Express is running as a service along with your SQL Server engine, if you have installed it). This means that even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and it needs to run as a service. Many hosters don't allow such services to run on their space. SQL CE comes as an x-copy deployable database with just a few DLLs required to run it on the machine, and they don't even need to be installed in the GAC on the production machine. In fact, if you have Visual Studio 2010 SP1 installed, you can use the "Add Deployable Dependencies" option in Project Properties and it will detect that SQL CE is something you would probably want to add as a deployable dependency for your project. With that, it bundles the required DLLs as part of the "_bin_deployableAssemblies" folder. So your project can be x-copy deployed and just works. However, SQL CE has a limit of 4GB of storage space. Real-world applications often require more than 4GB of data storage, and it often turns out that people would like to use SQL CE for the development/ramp-up stages but migrate to full-fledged SQL Server after a while. So it's only natural that the question arises: "How do I move my SQL CE database to SQL Server?" And honestly, there isn't straightforward built-in support for it. I was talking to Ambrish Mishra (PM in the SQL CE team, Hyderabad), since I got this question in almost all the places where we talked about SQL CE. He was kind enough to demonstrate how this can be accomplished using Web Matrix.
    Open Web Matrix (Web Matrix can be installed for free from www.microsoft.com/web) and click on "Site from Template". Click on the "Bakery" template (since by default it uses a SQL CE database and has all the required sample data) and click "Ok". In the project, you can navigate to the Database tab and will find that the Bakery site uses a SQL CE database, "bakery.sdf". Select "bakery.sdf" and you will see the "Migrate" button on the top right. Once you click the "Migrate" button, a popup wizard opens, configured by default for SQL Express. You can edit it to point to your local SQL Server instance, or to a remote server. Upon filling in the Server Name, Username and Password, when you click "Ok", a couple of things happen:
    1. The database is migrated to SQL Server (local or remote – subject to permissions on the remote server). You can open SQL Server Management Studio and connect to the server to verify that the "bakery" database exists under the "Databases" node.
    2. You can also notice that in Web Matrix, when you navigate to the "Files" tab and open the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too). And there it is: your SQL Server Compact Edition database, now migrated to SQL Server!! In a future post, I will explain the steps involved when using Visual Studio. Cheers !!!
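    To make the web.config change concrete, the switch the Migrate button performs amounts to something like the following (the connection string values here are illustrative, not copied from the Bakery template):
       <!-- Before: SQL Server Compact Edition -->
       <connectionStrings>
         <add name="bakery"
              connectionString="Data Source=|DataDirectory|\bakery.sdf"
              providerName="System.Data.SqlServerCe.4.0" />
       </connectionStrings>

       <!-- After: full SQL Server -->
       <connectionStrings>
         <add name="bakery"
              connectionString="Data Source=MYSERVER;Initial Catalog=bakery;User ID=webuser;Password=..."
              providerName="System.Data.SqlClient" />
       </connectionStrings>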

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type. From Books On-Line: "Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits." IO_COMPLETION explanation: tasks are waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at. Reducing IO_COMPLETION waits: when it is an issue concerning IO, one should look at the following things related to the IO subsystem:
       - Proper placement of the files is very important. We should check the file system for proper placement of files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
       - Check the File Statistics and see if there is a higher IO Read and IO Write Stall: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
       - Check the event log and error log for any errors or warnings related to IO.
       - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept that. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup, and as soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden big improvement in performance.
       - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate cover index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (considering the cover index has fewer columns than the clustered table; it depends upon the situation). You can refer to the two articles I wrote about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.
    Checking memory-related Perfmon counters:
       - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
       - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
       - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
       - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
       - Memory: Available Mbytes (information only)
       - Memory: Page Faults/sec (benchmark only)
       - Memory: Pages/sec (benchmark only)
    Checking disk-related Perfmon counters:
       - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
       - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
       - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology
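    The SQL Server-specific Perfmon counters listed above can also be read from inside the engine; a minimal sketch (counter names as exposed by sys.dm_os_performance_counters, which can vary slightly with instance naming) is:
       -- Spot-check buffer manager and memory manager health without opening Perfmon
       SELECT object_name, counter_name, cntr_value
       FROM sys.dm_os_performance_counters
       WHERE counter_name IN (N'Page life expectancy', N'Memory Grants Pending');
    A Page Life Expectancy that stays below 300, or Memory Grants Pending consistently above zero, corroborates the memory pressure the post describes.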

  • Building Extensions Using E-Business Suite SDK for Java

    - by Sara Woodhull
    We’ve just released Version 2.0.1 of the Oracle E-Business Suite SDK for Java. This new version has several great enhancements added after I wrote about the first version of the SDK in 2010. In addition to the AppsDataSource and Java Authentication and Authorization Service (JAAS) features that are in the first version, the Oracle E-Business Suite SDK for Java now provides:
       - Session management APIs, so you can share session information with Oracle E-Business Suite
       - Setup script for UNIX/Linux for AppsDataSource and JAAS on Oracle WebLogic Server
       - APIs for Message Dictionary, User Profiles, and NLS
       - Javadoc for the APIs (included with the patch)
       - Enhanced documentation included with Note 974949.1
    These features can be used with either Release 11i or Release 12.
    References:
       - AppsDataSource, Java Authentication and Authorization Service, and Utilities for Oracle E-Business Suite (Note 974949.1)
       - FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications (Doc ID 1296491.1)
    What's new in those references? Note 974949.1 is the place to look for the latest information as we come out with new versions of the SDK. The patch number changes for each release. Version 2.0.1 is contained in Patch 13882058, which is for both Release 11i and Release 12. Note 974949.1 includes the following topics:
       - Applying the latest patch
       - Using Oracle E-Business Suite Data Sources
       - Oracle E-Business Suite Implementation of Java Authentication and Authorization Service (JAAS)
       - Utilities
       - Error logging
       - Session management
       - Message Dictionary
       - User profiles
       - Navigation to External Applications
       - Java EE Session Management Tutorial
    For those of you using the SDK with Oracle ADF, besides adding some Oracle ADF-specific documentation in Note 974949.1, we also updated the ADF Integration FAQ.
    EBS SDK for Java use cases: The uses of the Oracle E-Business Suite SDK for Java fall into two general scenarios for integrating external applications with Oracle E-Business Suite:
       - Application sharing a session with Oracle E-Business Suite
       - Independent application (not shared session)
    With an independent application, the external application accesses Oracle E-Business Suite data and server-side APIs, but it has a completely separate user interface. The external application may also launch pages from the Oracle E-Business Suite home page, but after the initial launch there is no further communication with the Oracle E-Business Suite user interface. Shared session integration means that the external application uses an Oracle E-Business Suite session (ICX session), shares session context information with Oracle E-Business Suite, and accesses Oracle E-Business Suite data. The external application may also launch pages from the Oracle E-Business Suite home page, or regions or pages from the external application may be embedded as regions within Oracle Application Framework pages. Both shared session applications and independent applications use the AppsDataSource feature of the Oracle E-Business Suite SDK for Java. Independent applications may also use the Java Authentication and Authorization Service (JAAS) and logging features of the SDK. Applications that share the Oracle E-Business Suite session use the session management feature (instead of the JAAS feature), and they may also use the logging, profiles, and Message Dictionary features of the SDK.
    The session management APIs allow you to create, retrieve, validate and cancel an Oracle E-Business Suite session (ICX session) from your external application. Session information and context can travel back and forth between Oracle E-Business Suite and your application, allowing you to share session context information across applications. Note: Generally you would use either the Java Authentication and Authorization Service (JAAS) feature of the SDK or the session management feature, but not both together.
    Send us your feedback: Since the Oracle E-Business Suite SDK for Java is still pretty new, we'd like to know who is using it and what you are trying to do with it. We'd like to get this type of information:
       - customer name and brief use case
       - configuration and technologies (Oracle WebLogic Server or OC4J, plain Java, ADF, SOA Suite, and so on)
       - project status (proof of concept, development, production)
       - any other feedback you have about the SDK
    You can send me your feedback directly at Sara dot Woodhull at Oracle dot com, or you can leave it in the comments below. Please keep in mind that we cannot answer support questions, so if you are having specific issues, please log a service request with Oracle Support. Happy coding!
    Related Articles:
       - New Whitepaper: Extending E-Business Suite 12.1.3 using Oracle Application Express
       - To Customize or Not to Customize?
       - New Whitepaper: Upgrading your Customizations to Oracle E-Business Suite Release 12
       - ATG Live Webcast: Upgrading your EBS 11i Customizations to Release 12
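    As a rough illustration of how an external Java application typically consumes a container-registered data source such as AppsDataSource, a generic JNDI lookup is sketched below. The JNDI name here is a hypothetical placeholder; the authoritative class names and registration steps are in Note 974949.1:
       import java.sql.Connection;
       import javax.naming.InitialContext;
       import javax.sql.DataSource;

       public class EbsDataSourceDemo {
           public static void main(String[] args) throws Exception {
               InitialContext ctx = new InitialContext();
               // "jdbc/EbsAppsDS" is a hypothetical JNDI name; use whatever name the
               // AppsDataSource setup script registered on your WebLogic domain.
               DataSource ds = (DataSource) ctx.lookup("jdbc/EbsAppsDS");
               Connection conn = ds.getConnection();
               try {
                   // Query Oracle E-Business Suite data through this connection
                   System.out.println("Connected as: " + conn.getMetaData().getUserName());
               } finally {
                   conn.close();
               }
           }
       }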

  • Building a project in VS that depends on a static and dynamic library

    - by fg nu
    Noob noobin'. I would appreciate some very careful handholding in setting up an example in Visual Studio 2010 Professional where I am trying to build a project which links:
       - a previously built static library, for which the VS project folder is "C:\libjohnpaul\"
       - a previously built dynamic library, for which the VS project folder is "C:\libgeorgeringo\"
    These are listed as Recipes 1.11, 1.12 and 1.13 in the C++ Cookbook. The project fails to compile for me with unresolved dependencies (see details below), and I can't figure out why.
    Project 1: Static Library. The following are the header and source files that were compiled in this project. I was able to compile this project fine in VS2010, to the named static library "libjohnpaul.lib", which lives in the folder "C:/libjohnpaul/Release/".
       // libjohnpaul/john.hpp
       #ifndef JOHN_HPP_INCLUDED
       #define JOHN_HPP_INCLUDED
       void john( ); // Prints "John, "
       #endif // JOHN_HPP_INCLUDED

       // libjohnpaul/john.cpp
       #include <iostream>
       #include "john.hpp"
       void john( ) { std::cout << "John, "; }

       // libjohnpaul/paul.hpp
       #ifndef PAUL_HPP_INCLUDED
       #define PAUL_HPP_INCLUDED
       void paul( ); // Prints " Paul, "
       #endif // PAUL_HPP_INCLUDED

       // libjohnpaul/paul.cpp
       #include <iostream>
       #include "paul.hpp"
       void paul( ) { std::cout << "Paul, "; }

       // libjohnpaul/johnpaul.hpp
       #ifndef JOHNPAUL_HPP_INCLUDED
       #define JOHNPAUL_HPP_INCLUDED
       void johnpaul( ); // Prints "John, Paul, "
       #endif // JOHNPAUL_HPP_INCLUDED

       // libjohnpaul/johnpaul.cpp
       #include "john.hpp"
       #include "paul.hpp"
       #include "johnpaul.hpp"
       void johnpaul( ) { john( ); paul( ); }
    Project 2: Dynamic Library. Here are the header and source files for the second project, which also compiled fine with VS2010; the "libgeorgeringo.dll" file lives in the directory "C:\libgeorgeringo\Debug".
       // libgeorgeringo/george.hpp
       #ifndef GEORGE_HPP_INCLUDED
       #define GEORGE_HPP_INCLUDED
       void george( ); // Prints "George, "
       #endif // GEORGE_HPP_INCLUDED

       // libgeorgeringo/george.cpp
       #include <iostream>
       #include "george.hpp"
       void george( ) { std::cout << "George, "; }

       // libgeorgeringo/ringo.hpp
       #ifndef RINGO_HPP_INCLUDED
       #define RINGO_HPP_INCLUDED
       void ringo( ); // Prints "and Ringo\n"
       #endif // RINGO_HPP_INCLUDED

       // libgeorgeringo/ringo.cpp
       #include <iostream>
       #include "ringo.hpp"
       void ringo( ) { std::cout << "and Ringo\n"; }

       // libgeorgeringo/georgeringo.hpp
       #ifndef GEORGERINGO_HPP_INCLUDED
       #define GEORGERINGO_HPP_INCLUDED
       // define GEORGERINGO_DLL when building libgeorgeringo.dll
       #if defined(_WIN32) && !defined(__GNUC__)
       # ifdef GEORGERINGO_DLL
       #  define GEORGERINGO_DECL __declspec(dllexport)
       # else
       #  define GEORGERINGO_DECL __declspec(dllimport)
       # endif
       #endif // WIN32
       #ifndef GEORGERINGO_DECL
       # define GEORGERINGO_DECL
       #endif
       // Prints "George, and Ringo\n"
       #ifdef __MWERKS__
       # pragma export on
       #endif
       GEORGERINGO_DECL void georgeringo( );
       #ifdef __MWERKS__
       # pragma export off
       #endif
       #endif // GEORGERINGO_HPP_INCLUDED

       // libgeorgeringo/georgeringo.cpp
       #include "george.hpp"
       #include "ringo.hpp"
       #include "georgeringo.hpp"
       void georgeringo( ) { george( ); ringo( ); }
    Project 3: Executable that depends on the previous libraries. Lastly, I try to link the aforecompiled static and dynamic libraries into one project called "helloBeatlesII" which has the project directory "C:\helloBeatlesII" (note that this directory does not nest the other project directories).
    The linking process that I did is described below: to the "helloBeatlesII" solution, I added the projects "libjohnpaul" and "libgeorgeringo"; then I changed the properties of the "helloBeatlesII" project to additionally point to the include directories of the other two projects on which it depends ("C:\libgeorgeringo\libgeorgeringo" & "C:\libjohnpaul\libjohnpaul"); added "libgeorgeringo" and "libjohnpaul" to the project dependencies of the "helloBeatlesII" project; and made sure that the "helloBeatlesII" project was built last. Trying to compile this project gives me the following unsuccessful build:
       1>------ Build started: Project: helloBeatlesII, Configuration: Debug Win32 ------
       1>Build started 10/13/2012 5:48:32 PM.
       1>InitializeBuildStatus:
       1>  Touching "Debug\helloBeatlesII.unsuccessfulbuild".
       1>ClCompile:
       1>  helloBeatles.cpp
       1>ManifestResourceCompile:
       1>  All outputs are up-to-date.
       1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl georgeringo(void)" (?georgeringo@@YAXXZ) referenced in function _main
       1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl johnpaul(void)" (?johnpaul@@YAXXZ) referenced in function _main
       1>E:\programming\cpp\vs-projects\cpp-cookbook\helloBeatlesII\Debug\helloBeatlesII.exe : fatal error LNK1120: 2 unresolved externals
       1>
       1>Build FAILED.
       1>
       1>Time Elapsed 00:00:01.34
       ========== Build: 0 succeeded, 1 failed, 2 up-to-date, 0 skipped ==========
    At this point I decided to call in the cavalry. I am new to VS2010, so in all likelihood I am missing something straightforward.

  • Autoscaling in a modern world… last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important. Our code will work right the first time. So, you can understand my surprise when, the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale. I mean, it worked on my machine. How dare the datacenter argue with that. So, how did I track down the problem? (Turns out, it was not so much code as lack of the right certificate.) When I ran it locally in the developer fabric, I was able to see a wealth of information: lots of periodic status info every time the autoscaler came around to check on my rules and decide whether to act. But that information was not making it to Azure storage. The diagnostics were not being transferred to where I could easily see them and use them to track down why things were not being cooperative. After a bit of digging, I discovered the problem. You need to add a bit of extra configuration to get the correct information stored for you. I added the following to my app.config:
       <system.diagnostics>
         <sources>
           <source name="Autoscaling General" switchName="SourceSwitch"
                   switchType="System.Diagnostics.SourceSwitch">
             <listeners>
               <add name="AzureDiag" />
               <remove name="Default" />
             </listeners>
           </source>
           <source name="Autoscaling Updates" switchName="SourceSwitch"
                   switchType="System.Diagnostics.SourceSwitch">
             <listeners>
               <add name="AzureDiag" />
               <remove name="Default" />
             </listeners>
           </source>
         </sources>
         <switches>
           <add name="SourceSwitch" value="Verbose, Information, Warning, Error, Critical" />
         </switches>
         <sharedListeners>
           <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiag" />
         </sharedListeners>
         <trace>
           <listeners>
             <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
               <filter type="" />
             </add>
           </listeners>
         </trace>
       </system.diagnostics>
    Suddenly all the rich tracing info I needed was filling up my storage account. After a few cycles of trying to scale, I identified the cert problem, uploaded a correct certificate, and away it went. I hope this was helpful.
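    A note for anyone reproducing this: the trace listener on its own only buffers data locally on the role instance. In the Azure SDK of that era, the role also had to schedule the transfer of logs to storage, typically in OnStart(), along these lines (the connection string name assumed here is the standard Diagnostics plugin setting):
       // Minimal sketch: push buffered trace logs to Azure storage every minute
       var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
       config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
       config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
       DiagnosticMonitor.Start(
           "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    Without a scheduled (or on-demand) transfer, the DiagnosticMonitorTraceListener output never leaves the role instance, which looks exactly like the "nothing in storage" symptom described above.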

  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall mounted monitor who's whole purpose in life is to display perfmon stats from our various servers.  And on a fairly regular basis, I have folks walk by asking what the lines mean.    After providing the requisite explaination about CPU utilization, disk I/O bottlenecks, etc. this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?'   This of course would not do.  So I talked to my friends and our network admin about an option to show something more eye catching and visual, with which we could catch at a glance a feel for what was up with our site.    He initially pointed me out to a video showing GLTail and Chipmunk done in Ruby.  Realizing this was both awesome, and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks.   Here's a link to a video of the current prototype:   http://www.youtube.com/watch?v=jM_PWZbtH2I   Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text.  (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx).   As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right.  I then run it through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated.   For example, if I get a 500, a huge red bubble pops out.  Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients.  Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages.   The little grey bubbles represent things like images, JS, CSS, etc - and their small size makes them work like grease to keep the larger page bubbles moving.   The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video).   The net result is that anyone will be able to look up on the wall monitor, and instantly get a quick feel for how things are going on the floor.  Website slow?  You can get a feel for both volume and utilized modules with one glance.  Website crashing?  Look for a wall of giant red bubbles.  No activity at all?  Maybe the site is down.  Now couple this with utilization within a farm, and cross referenced with a second app showing the same kind of data from your SQL database...   As for the app itself, it's a windows XNA project with the code in C#.   The physics are handled by the Farseer physicis eingine for XNA (http://www.codeplex.com/FarseerPhysics) which is just pure goodness.  The samples are great, and I had the app up and working in two evenings (half of that was fine tuning, and the other was me coding with a kid in my lap).   My next steps include wiring this to SQL (I have some ideas...), and adding a nice configuration module.  For example, you could use polygons, etc to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.     
Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source. Feel free to ping me if you have any questions! -Bob

    Read the article

  • Introducing Oracle System Assistant

    - by B.Koch
    by Josh Rosen
    One of the challenges with today's servers is getting the server up and running and understanding all of the steps involved once you plug the server in for the first time. So many different pieces come into play: installing drivers, updating firmware, configuring RAID, and provisioning the operating system. All of these steps must be done before you can even start using the server. Finding the latest firmware and drivers, making sure you have the right versions, and knowing that all the different software and firmware components work together properly can be a real challenge. If not done correctly - for example, if you separately download disk firmware or controller firmware that doesn't match the existing OS drivers - you could experience bugs, performance problems, and incompatibilities. Gone are the days of having to locate the tools and drivers media that shipped with the server, only to find out that newer versions of software and firmware are available on the web.
    Oracle has solved these challenges in the new X3-2 family of servers by introducing Oracle System Assistant. Oracle System Assistant is an innovative tool that is built into every new x86 server. It provides step-by-step assistance with configuring the server, updating firmware and drivers, and provisioning the operating system. Once you have completed all of the steps in the Oracle System Assistant tool, the server is ready to use.
    Oracle System Assistant was designed to be easy and straightforward. Starting it is as simple as pressing F9 when the server is booting. You'll need a keyboard, monitor, and mouse, or you can use the remote console feature of Oracle ILOM (Integrated Lights Out Manager) to access a virtual KVM to the server from any machine. From there Oracle System Assistant will walk you through each of the steps necessary to set up your server. After configuring the network settings for Oracle System Assistant, the next step is to check for any new software or firmware for the server. Oracle System Assistant connects back to Oracle using your My Oracle Support account and downloads any updates that were made available to you for this specific server.
    This is where you really start to see the innovation that went into Oracle System Assistant. Firmware for Oracle ILOM and BIOS, operating system drivers, and other system firmware (including for option cards and disk drives) come as a single bundle that downloads as a single unit and has been engineered and tested to work together by Oracle. Oracle System Assistant figures out the right combination for your server, so you don't have to.
    Now that the server has the latest firmware, Oracle System Assistant will next walk you through configuring the hardware. From Oracle System Assistant, you can configure many Oracle ILOM settings, including the network settings and initial user accounts. This ensures that ILOM is accessible and ready to use. Oracle System Assistant is where all parts of the server come together. In addition to communicating with Oracle ILOM and interacting with BIOS, Oracle System Assistant understands and can configure the storage subsystem. Before installing the operating system, Oracle System Assistant can detect the storage configuration and configure RAID for all disks in the system. At this point, the server is ready to be provisioned with the host operating system. You can use Oracle System Assistant to provision a supported OS, including Oracle Linux, Oracle VM, RHEL, SUSE Linux, and Windows.
And by using Oracle System Assistant, you can be sure that the proper OS drivers are installed for each of the installed hardware components. With Oracle System Assistant, initial setup of the server has never been easier. If we can innovate around problems and find solutions to make our servers easier to manage, this reduces IT costs and makes managing servers simpler. I think with Oracle System Assistant we have done just that. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • 12.04 installation started to black screen during boot today

    - by Cedric
    NOTE: Most of this question is now irrelevant. UPDATE 3 summarizes the problem as it stands.
    I've been running 12.04 on my Lenovo laptop for one month now (updated from 11.04), and I have not had any significant problem until today. This morning, when I boot, I pass the Grub screen, then I get to the purple loading screen with dots as usual, then for some reason I get to the terminal login, with no GUI. startx gives me a black screen. Ctrl+F7/F8 didn't help either. It's similar to: After the update today no graphical interface anymore - 12.04. I followed the instructions at the end, to flush the ATI drivers (which I had installed) and fall back to the community drivers. That made me lose the login! Now I just get a black screen after the Ubuntu loading screen. I can still access the console through recovery, and I've gotten into VESA mode once or twice (not reproducible, for some reason). I've tried various permutations of xorg.conf, without success. Xorg -configure fails for now, though I might be able to get it to work. apt-get update/upgrade doesn't improve anything either. However, both Windows and the 12.04 Live CD still work beautifully, and I know that all my data is still there. Is there any way that I could somehow take the configuration from the Live CD and roll with it? I know that I could reinstall, but that sucks, frankly, especially given that there's no straightforward way of keeping the home (which, incidentally, is inaccessible from the Live CD). Thank you.
    Update: it seems that the fglrx drivers are still active, even after I've --purged them. From Xorg.0.log:
    [ 18.235] (WW) fglrx(0): ***********************************************************
    [ 18.235] (WW) fglrx(0): * DRI initialization failed                               *
    [ 18.235] (WW) fglrx(0): * kernel module (fglrx.ko) may be missing or incompatible *
    [ 18.235] (WW) fglrx(0): * 2D and 3D acceleration disabled                         *
    [ 18.235] (WW) fglrx(0): ***********************************************************
    [ 18.235] Fatal server error:
    [ 18.235] AddScreen/ScreenInit failed for driver 0
    There's also a mention of the "fbdev" module. What is it?
    PARTIALLY SOLVED: I've undone the damage from the fglrx purge. I'm still mystified as to why uninstalling the packages didn't kill fglrx entirely, but I've now recovered the prompt. The solution to the DRI initialization error was to add radeon.modeset=0 to the GRUB boot options. So I'm back to being dropped to a prompt without any GUI. startx gives me a bunch of messages, though no obvious errors. I have little reason to suspect the video drivers, as they worked fine before today. There is no apparent error message in any of the log files.
    UPDATE: When I startx, I get an error:
    Plymouth command failed
    mountall: Disconnected from Plymouth
    This is all over the Internet, but I have not found anything that works for me yet.
    UPDATE 3: If I press ESC during boot, the splash screen (Plymouth!) disappears, and I no longer have any error from Plymouth. The last error message is:
    Stopping mount filesystems on boot
    I can then Ctrl+Alt+F1 to get to TTY1, but startx still does not work. Sadly, the Internet knows nothing about this error message, and neither do I. Help!
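    For reference, since the working fix above was the radeon.modeset=0 boot option: a minimal sketch of making that option permanent on 12.04 (the file and the quiet splash defaults shown are the stock Ubuntu ones; adjust to your own setup) is to edit /etc/default/grub and regenerate the boot configuration:

        # /etc/default/grub -- append the option to the default kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.modeset=0"

        # then apply the change and reboot
        sudo update-grub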

    Read the article

  • How to create a Global Rule that stores a document’s folder path in a custom metadata field

    - by Nicolas Montoya
    Efficiency purists would argue that redundancy is not necessary. In real life, we are willing to pay a price for performance - i.e., to have information at our fingertips. We have run into customers opting to store a document's folder path as a document metadata field. They have their reasons, half of the ECM community will agree with them, and the other half would raise an eyebrow. In the end, they are getting creative to achieve their document management goals. The steps below outline how to create a Global Rule that stores a document's folder path in a custom metadata field:
    1. Create a Global Rule via Configuration Manager > Rules Tab > Add.
    2. Check “Is global rule with priority”.
    3. Check “Use rule activation condition”.
    4. Go to “Edit” and check the actions for the Script Properties.
    5. Click OK, and the rule activation condition will appear.
    6. Go to the Fields Tab and add a Rule Field.
    7. Select the target Custom Metadata Field and click OK, then check “Is derived field”, then “Edit”, then go to the Custom Tab in the Script Properties window and enter the custom script below:
    <$if #active.dCollectionPath$>
    <$dprDerivedValue=#active.dCollectionPath$>
    <$else$>
    <$dprDerivedValue=#active.xCollectionIDPath$>
    <$endif$>
    For more information on the dCollectionPath property, see Section 8.2, Folder Services, in the Oracle Fusion Middleware Services Reference Guide for Oracle Universal Content Management 11g Release 1 (11.1.1): http://docs.oracle.com/cd/E21043_01/doc.1111/e11011/c08_folders002.htm
    The above rule will keep the Custom Metadata Field updated with the folder path information when a document is checked in via the Content Server (CS) Web Interface or the Desktop Integration Suite (DIS).

    Read the article

  • Unity 3d support for multiple X-screens

    - by stewbond
    I installed Ubuntu 12.04 this weekend and have had problems getting Unity 3D to work with my triple-monitor set-up. I've installed the latest nVidia drivers for my 2 nVidia video cards and have used NVIDIA X Server Settings to configure everything. If I stick each monitor on a separate X screen, all but the primary X screen will appear white and the cursor will be a black X. I can start a terminal on this screen but cannot drag other windows to it. If I kill the "Nautilus" process, the white background disappears and I see my desktop background, but my cursor is still an X and I still cannot drag windows to it. If I enable TwinView, I can get one screen on two monitors to work properly, but the white screen remains on my third monitor. In addition, I don't like using TwinView because my full-screen applications get stretched. My current solution is to "Enable Xinerama", use all separate X screens, and revert to Unity 2D. I'd love a solution to this, as Ubuntu 12.10 is not planned to support Unity 2D and I prefer the aesthetics of 3D anyway. I can provide xorg.conf for all configurations and any other suggested diagnostic information. Below is my closest-to-working xorg.conf:
    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:38:49 UTC 2012
    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0" RightOf "Screen2"
        Screen 1 "Screen1" RightOf "Screen0"
        Screen 2 "Screen2" 0 0
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
        Option "Xinerama" "1"
    EndSection
    Section "Files"
    EndSection
    Section "InputDevice"
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection
    Section "InputDevice"
        Identifier "Keyboard0"
        Driver "kbd"
    EndSection
    Section "Monitor"
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "Microvitec PLC MV191"
        HorizSync 30.0 - 81.0
        VertRefresh 56.0 - 75.0
        Option "DPMS"
    EndSection
    Section "Monitor"
        Identifier "Monitor1"
        VendorName "Unknown"
        ModelName "CRT-1"
        HorizSync 28.0 - 55.0
        VertRefresh 43.0 - 72.0
        Option "DPMS"
    EndSection
    Section "Monitor"
        Identifier "Monitor2"
        VendorName "Unknown"
        ModelName "Microvitec PLC MV191"
        HorizSync 30.0 - 81.0
        VertRefresh 56.0 - 75.0
        Option "DPMS"
    EndSection
    Section "Monitor"
        Identifier "Monitor3"
        VendorName "Unknown"
        ModelName "Microvitec PLC MV191"
        HorizSync 30.0 - 81.0
        VertRefresh 56.0 - 75.0
    EndSection
    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTS 450"
        BusID "PCI:1:0:0"
        Screen 0
    EndSection
    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce 9500 GT"
        BusID "PCI:2:0:0"
    EndSection
    Section "Device"
        Identifier "Device2"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTS 450"
        BusID "PCI:1:0:0"
        Screen 1
    EndSection
    Section "Device"
        Identifier "Device3"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTS 450"
        BusID "PCI:1:0:0"
        Screen 2
    EndSection
    Section "Screen"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024 +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection
    Section "Screen"
        Identifier "Screen1"
        Device "Device1"
        Monitor "Monitor1"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection
    Section "Screen"
        Identifier "Screen2"
        Device "Device2"
        Monitor "Monitor2"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "DFP-2: 1280x1024_75 +0+0; DFP-2: nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection
    Section "Screen"
        Identifier "Screen3"
        Device "Device3"
        Monitor "Monitor3"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "DFP-2: nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection
    Section "Extensions"
        Option "Composite" "Disable"
    EndSection
    Other people have had similar issues. I haven't been successful with any of the few suggestions submitted:
    Can not get Dual Monitors to work on Different GPUs
    http://askubuntu.com/questions/30412/3-monitors-with-2-video-cards-not-working?rq=1
    How to get second display to work alongside primary display?
    http://askubuntu.com/questions/148007/multiple-monitors-only-work-with-unity-2d/201086#201086
    xorg.conf and Unity3D?

    Read the article

  • SPARC T5-8 Servers EMEA Acceleration Promotion for Partners

    - by mseika
    Dear all,
    We are pleased to announce the EMEA T5-8 Acceleration Promotion, a price promotion that, for a limited time, makes the T5-8 server available to our EMEA partners at a very attractive discount.
    Why the SPARC T5-8 server
    Oracle's SPARC servers running Oracle Solaris are ideal for mission-critical applications requiring high performance, best-in-class availability, and unmatched scalability on all application tiers. SPARC servers include built-in virtualization, systems management, and security at no additional cost, and are designed for applications that demand the highest performance and 24x7 availability. Oracle's SPARC T5-8 server is the fastest and most advanced scalable midrange server in the Oracle portfolio. The Oracle SPARC T5-8 server is in the sweet spot of the UNIX midrange and directly competes with IBM P770(+) and P780(+) systems, with a 7x price advantage (see the official Oracle press release) over a similarly configured P780 system!
    What we are offering
    Effective immediately, the fully configured T5-8 server is available to VADs with a 38% discount off the price list: this is 8 additional points on top of the standard 30% contractual discount. The promo will be communicated to VADs and VARs, and VADs are expected to pass the additional discount through to the VARs. Resellers will be encouraged to use this attractive price to position the T5-8 versus the competition, accelerate T5-8 sales, and use the increased margin to offer additional services to their end users - thus expanding their footprint within their customers and making the T5-8 business proposition even more compelling. This is a unique opportunity for partners to expand their base and beat the competition with a 7x price advantage over a similarly configured IBM P780. This price promotion is only available to OPN Partners, and is valid until November 30, 2013.
    What's in it for Partners
    - More competitive price
    - More customer budget available for more projects: attach migration services, training, ...
    - Opportunity to attach Storage and additional Software
    - Higher win rate
    Additional Details
    The promotion is valid for the existing configurations of T5-8 with 8 CPUs and different memory configurations, including all X-options that are part of the system and ordered at the same time. 8% additional discount to the VAD on the full T5-8, including X-options:
    - Cat V (30% + 8% additional): System, CPU, Memory, Disks, Ethernet
    - Cat U (22% + 8% additional): InfiniBand HCA
    - Cat W (30% + 8% additional): FC/SAS HBA / FCoE CNA
    Partner eligibility criteria
    Standard requirements apply. Partners must:
    - be an OPN member in good standing, at Gold level or above
    - meet the Resale criteria in the SPARC T-Series servers Knowledge Zone
    - have a right to distribute hardware via the Full Use Distribution Agreement, with Hardware Addendum if applicable.
    Order process
    The promotion is available until November 30, 2013. VADs place the order via the Oracle Partner Store. A request for extra discount has to be raised in advance using the standard process for available configs:
    - input the configuration
    - apply the suggested discounts
    - submit the request
    In the request documentation, please refer to EMEA T5-8 FY14H1 Channel Promotion as approved in GDMT GT-EB2-Q413-107C. This promotion is only valid for the T5-8 configurations stated in this announcement. Any change, or additional products/items not listed explicitly, can be ordered at the same time and will follow the standard approval process.
    Key contacts
    - Your local A&C organization
    - For questions on EMEA Partner Programs for Servers: Giuseppe Facchetti
    - For questions on the T5-8 product: Martin de Jong
    Best regards,
    Olivier Tordo, Senior Director, Sales & Strategy, Hardware Solutions, EMEA Alliances & Channels
    Paul Flannery, Senior Director, EMEA Servers Product Management

    Read the article

  • Oracle's PeopleSoft Customer Advisory Boards Convene to Discuss Roadmap at Pleasanton Campus

    - by john.webb(at)oracle.com
    Last week we hosted all of the PeopleSoft CABs (Customer Advisory Boards) at our Pleasanton Development Center to review our detailed designs for future Feature Packs, PeopleSoft 9.2, and beyond. Over 150 customers from 79 companies attended, representing a variety of industries, geographies, and company sizes. The PeopleSoft team relies heavily on this group to provide key input on our roadmap for applications as well as technology direction. A good product strategy is one part well-thought-out idea with many handfuls of customer validation, and very often our best ideas originate from these customer discussions. While the individual CABs have frequent interactions with our teams, it's always great to have all of them in one place and in person. Our attendance was up from last year, which I attribute to two things: (1) more interest as a result of the PeopleSoft 9.1 upgrade; (2) an improving economy allowing for more travel. Maybe we should index the second item meeting-to-meeting and use it as a market indicator - we'll see!
    We kicked off the day one session with an overview of the PeopleSoft Roadmap, and I outlined our strategy around Feature Packs and PeopleSoft 9.2. Given the high adoption rate of PeopleSoft 9.1 (over 4x that of 9.0 at the same point after release), there was a lot of interest in the 9.1 Feature Packs as a vehicle for continuous value. We provided examples of our three central design themes: Simplicity, Productivity, and lower TCO, including those already delivered via Feature Packs in 2010. A great example of this is the Company Directory feature in PeopleSoft HCM. The configuration capabilities and the new actionable links our CAB advised us on last spring were made available to all customers late last year. We reviewed many more future navigation changes that will fundamentally change the way users interact with PeopleSoft. Our old friend, the menu tree, is being relegated from center stage to a bit part, with new concepts like Activity Guides, Train Stops, Related Actions, Work Centers, Collaborative Workspaces, and Secure Enterprise Search bringing users what they need in a contextual, role-based manner with fewer clicks.
    Paco Aubrejuan, our PeopleSoft GM, and Steve Miranda, the SVP for Fusion Applications, then discussed our plans around Oracle's Application Investment Strategy. This included our continued investment in developing both PeopleSoft and Fusion, as well as the co-existence strategy, with new Fusion Apps integrating with PeopleSoft Apps. Should you want to view this presentation, a recording is available.
    Jeff Robbins, our lead PeopleTools Strategist, provided the roadmap for PeopleTools and discussed our continuing plan to deliver annual releases to further evolve the user experience. Numerous examples were highlighted with the navigation techniques I mentioned previously. Jeff also provided a lot of food for thought around Lifecycle Management topics and how to remain current on releases with a lower cost of ownership. Dennis Mesler from Boise was the guest speaker in this slot, speaking about the new PeopleSoft Test Framework (PTF). Regression testing is a key cost component when product updates are applied. This new tool (which is free to all PeopleSoft customers as part of PeopleTools 8.51) provides a metadata-driven approach to recording and executing test scripts.
    Coupled with what our Usage Monitor enables, PTF provides our customers a powerful tool to lower costs and manage product updates more efficiently and at the time of their choosing. Beyond the general session, we broke out into the individual CABs: HCM, Financials, ESA/ALM, SRM, SCM, CRM, and PeopleTools/Technology. A day and a half of very engaging discussions around our plans took place for each product pillar. More about that to follow in future posts.
    We capped the first day with a reception sponsored by our partners: Infosys, SmartERP (represented by Doris Wong), and Grey Sparling Solutions (represented by Chris Heller and Larry Grey). Great to see these old friends actively engaged in the very busy PeopleSoft ecosystem!
    Jeff Robbins previews the roadmap for PeopleTools with the PeopleSoft CAB

    Read the article

  • Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by Elke Phelps (Oracle Development)
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager today to scramble sensitive data in cloned environments.
    Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that the application can continue to function. Using the Data Masking Pack in E-Business Suite environments is now easier with the release of a new set of templates for E-Business Suite databases: the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack (Patch 13898999). This template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.
    Is there a charge for this?
    Yes. You must purchase licenses for Oracle Enterprise Manager and the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.
    What does data masking do in E-Business Suite environments?
    Application data masking does the following:
    - De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information (PII). Examples include information such as name, account, address, location, and driver's license number.
    - Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health, and employment information.
    - Maintain data validity: Provide a fully functional application.
    How can EBS customers use data masking?
    The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers, or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.
    References
    For additional information on the Oracle E-Business Suite Template for Data Masking Pack, please refer to the following:
    - Masking Sensitive Data for Non-production Use, in Oracle Enterprise Manager Concepts 11g
    - Using the Oracle E-Business Suite, Release 12.1.3 Template for the Data Masking Pack, Note 1437485.1
    Related Articles
    - Webcast Replay Available: E-Business Suite Data Protection
    - Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

< Previous Page | 535 536 537 538 539 540 541 542 543 544 545 546  | Next Page >