Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • SQL Server Contains Equivalent

    - by Derek D.
    Many Oracle developers searching for the SQL Server function compatible with their CONTAINS clause will most likely end up on this page by accident. Therefore, this page will be devoted to them rather than to SQL Server's CONTAINS function, which is used for full-text searching. The function most similar to Oracle's CONTAINS is CHARINDEX. The usage [...]
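    As a concrete, hedged illustration (the connection string, Articles table, and column names below are hypothetical, not from the post), here is how a CHARINDEX-based substring filter could be issued from ADO.NET: CHARINDEX(needle, haystack) returns the 1-based position of the first match, or 0 when the substring is not found, so a "> 0" test emulates a CONTAINS-style filter.

        using System;
        using System.Data.SqlClient;

        class CharIndexDemo
        {
            static void Main()
            {
                // Hypothetical database and table, for illustration only.
                using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
                {
                    conn.Open();

                    // CHARINDEX returns 0 when @term does not occur in Body.
                    var cmd = new SqlCommand(
                        "SELECT Title FROM Articles WHERE CHARINDEX(@term, Body) > 0", conn);
                    cmd.Parameters.AddWithValue("@term", "cpu usage");

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }

    Note that unlike the full-text CONTAINS predicate, a CHARINDEX filter cannot use a full-text index, so it scans the column; it is the closest functional match, not a performance match.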

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 2: Perform CRUD Operations Using the Entity Framework

    In this article, Vince demonstrates the usage of the Entity Framework 4 to create, read, update, and delete records in the database which was created in Part 1 of this series. After a short introduction, he discusses the various steps involved: modifying the database, creating a web form, selecting records to load a drop-down list, and adding, updating, deleting, and retrieving records from the database, with the help of relevant source code and screenshots.
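    As a rough, hedged sketch of the pattern the article walks through (StoreEntities and Product below are hypothetical stand-ins for the ObjectContext and entity generated from the Part 1 model, not names taken from the article), the four CRUD operations against an Entity Framework 4 ObjectContext look roughly like this:

        using System;
        using System.Linq;

        class CrudSketch
        {
            static void Main()
            {
                // StoreEntities: hypothetical ObjectContext generated from the model.
                using (var context = new StoreEntities())
                {
                    // Create
                    var product = new Product { Name = "Widget", Price = 9.99m };
                    context.Products.AddObject(product);
                    context.SaveChanges();

                    // Read
                    var loaded = context.Products.First(p => p.Name == "Widget");

                    // Update
                    loaded.Price = 12.49m;
                    context.SaveChanges();

                    // Delete
                    context.Products.DeleteObject(loaded);
                    context.SaveChanges();
                }
            }
        }

    In the article's web-form setting, these same calls would presumably be wired to button handlers and the drop-down's selected record rather than run in sequence.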

    Read the article

  • Correcting Grammar for Microsoft Products and Technology

    I see book authors, editors, bloggers, press, team members, and occasionally even a VP misspell our products, technologies, and features so often that I thought I would build and maintain a list of the correct capitalization and spelling of the most commonly misspelled Microsoft products and technologies. Sources: internal site (brandtools) and the Microsoft Trademarks Web site. Last updated: April 27, 2010.

    Incorrect → Correct
    .net or .Net → .NET
    .Net framework 4.0, .NET framework 4.0 → .NET Framework
    AdCenter, Ad Center, Adcenter → adCenter
    Ado.net, ADO.Net → ADO.NET
    Asp.net, ASP.Net → ASP.NET
    Asp.Net ajax, Asp.NET Ajax → ASP.NET AJAX
    Asp.Net Mvc → ASP.NET MVC
    Biz Spark, Bizspark → BizSpark
    Clear Type, Clear type, Cleartype → ClearType
    Directaccess, Direct Access → DirectAccess
    Direct Show, Directshow → DirectShow
    Direct X → DirectX
    Dream Spark, Dreamspark → DreamSpark
    Home Group, Home group → HomeGroup
    HotMail, Hot Mail → Hotmail
    Info Path, Infopath → InfoPath
    Intellisense → IntelliSense
    Iron Ruby → IronRuby
    Kin → KIN
    Linq → LINQ
    MSN Messenger → Windows Live Messenger
    One Note, Onenote → OneNote
    Open type, Opentype → OpenType
    PlayTo, Play to → Play To
    Power Point, Powerpoint → PowerPoint
    Powershell, Power Shell → PowerShell
    Sea Dragon, Seadragon → SeaDragon
    Sharepoint, Share Point → SharePoint
    Silver Light, SilverLight → Silverlight
    Skydrive, Sky Drive → SkyDrive
    Sql Server → SQL Server
    Visual Basic .net → Visual Basic (the .net was removed in the 2005 version)
    Visual C# Express 2010, Visual Basic Express 2010, Visual C++ Express 2010 → Visual <language> 2010 Express, as in Visual C# 2010 Express, Visual Basic 2010 Express
    Visual Studio 2010 Team Foundation Server → Visual Studio Team Foundation Server 2010
    Visual Studio Ultimate 2010, Visual Studio Professional 2010 → Visual Studio 2010 <edition>, as in Visual Studio 2010 Ultimate, Visual Studio 2010 Professional
    WebSite Spark, Website spark → Website Spark
    Win 32 → Win32
    Windows Mobile (except when referring to previous versions like 5.0 or 6), Windows phone 7 Series → Windows Phone
    Xaml → XAML
    XBOX, xbox → Xbox
    Xbox Live, XBOX Live → Xbox LIVE

    Caveats: These guidelines don't apply to URLs (ex: www.asp.net); code namespaces, variables, and classes should follow the .NET Framework naming guidelines. This list only covers capitalization and spacing rules; it doesn't cover the correct usage of the (tm) or (r) symbols or the correct word-usage rules. For those, refer to the trademark Web site. Also note that I have no idea why we are so inconsistent on, say, keeping features/brands as two words versus one word, or on the order of product/version/year.

    Read the article

  • Please help me give this principle a name

    - by Brent Arias
    As a designer, I like providing interfaces that cater to a power/simplicity balance. For example, I think the LINQ designers followed that principle because they offered both dot-notation and query-notation. The first is more powerful, but the second is easier to read and follow. If you disagree with my assessment of LINQ, please try to see my point anyway; LINQ was just an example, and my post is not about LINQ.

    I call this principle "dial-able power", but I'd like to know what other people call it. Certainly some will say "KISS" is the common term. But I see KISS as a superset, or a "consumerism" practice. Using LINQ as my example again: in my view, a team of programmers who always try to use query-notation over dot-notation are practicing KISS. Thus the LINQ designers practiced "dial-able power", whereas the LINQ consumers practice KISS. The two make beautiful music together.

    I'll give another example. Imagine a C# logging tool that has two signatures, allowing two uses:

    void Write(string message);
    void Write(Func<string> messageCallback);

    The purpose of the two signatures is to fulfill these needs:

    // Every-day "simple" usage, nothing special.
    myLogger.Write("Something Happened" + error.ToString());

    // This is performance critical; do not call ToString() if logging is disabled.
    myLogger.Write(() => "Something Happened" + error.ToString());

    Having these overloads represents "dial-able power", because the consumer has the choice of a simple interface or a powerful interface. A KISS-loving consumer will use the simpler signature most of the time, and will allow the "busy"-looking signature when the power is needed. This also helps self-documentation, because usage of the powerful signature tells the reader that the code is performance critical. If the logger had only the powerful signature, there would be no "dial-able power". So this comes full circle. I'm happy to keep my own "dial-able power" coinage if none yet exists, but I can't help thinking I'm missing an obvious designation for this practice.

    P.S. Another example that is related, but is not the same as "dial-able power", is Scott Meyers' principle "make interfaces easy to use correctly, and hard to use incorrectly."
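    A minimal sketch of such a logger, assuming a hypothetical MyLogger class with an IsEnabled flag and console output (none of which are from the original post), shows why the second overload matters: the callback is invoked only when logging is enabled, so the expensive string building is skipped otherwise.

        using System;

        public class MyLogger
        {
            public bool IsEnabled { get; set; }

            // Every-day signature: the caller builds the message eagerly.
            public void Write(string message)
            {
                if (IsEnabled)
                    Console.WriteLine(message);
            }

            // Performance-critical signature: the callback runs only when
            // logging is enabled, so ToString()/concatenation costs are
            // avoided when it is not.
            public void Write(Func<string> messageCallback)
            {
                if (IsEnabled)
                    Console.WriteLine(messageCallback());
            }
        }

    With this shape, myLogger.Write(() => "Something Happened" + error) costs almost nothing when IsEnabled is false, because the concatenation never runs.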

    Read the article

  • Retrieving system information without WMI

    - by user94481
    I want to write an application that fetches system information the way CPU-Z does, for example. I don't want to rely on WMI, because I want to grab details such as the manufacturing process of the GPU (from a database, say), and I don't want to maintain that data myself, because it would require too much effort. I already came across the HWiNFO32 SDK, but I wonder if there are any (preferably free) alternatives to it?

    Read the article

  • This Wed, Reading - Service Broker, Indexing, Normalisation, Sets, RI and Locking, Surrogate Keys

    - by tonyrogerson
    Registration is a must so we know numbers and for security; register here: http://sqlserverfaq.com/events/213/Service-Broker-Intro-Guidance-Indexing-Selection-Usage-Fragmentation-etc-Normalisation-Surrogate-Keys-Locking-considerations.aspx Network, learn, ask a question, meet other folk, get fed - these are all things that happen at user group events. These events are a really great opportunity to socialise in an informal learning experience - if you want your own exposure then come and do a 1 -...(read more)

    Read the article

  • The HTG Guide to Hiding Your Data in a TrueCrypt Hidden Volume

    - by Jason Fitzpatrick
    Last week we showed you how to set up a simple, but strongly encrypted, TrueCrypt volume to help you protect your sensitive data. This week we’re digging in deeper and showing you how to hide your encrypted data within your encrypted data.

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 5 - Using the GridView and the EntityDataSource

    In this article, Vince demonstrates the usage of the GridView control to view, add, update, and delete records using the Entity Framework 4. After a short introduction, he provides the steps required to create a web site, entity data model, web form, and template fields, with the help of relevant source code and screenshots.

    Read the article

  • SPARC Power Management Article at OTN

    - by Joerg Moellenkamp
    My colleague Karoly Vegh pointed in a tweet to a really interesting article about the usage of power management on SPARC T-series systems. The article explains how to use the power management, how it works, what it is able to do, and how to use it in a dynamic fashion according to anticipated load patterns. You can find the article "How to Use the Power Management Controls on SPARC Servers", written by Bruce Evans, Julia Harper, and Terry Whatley, on OTN.

    Read the article

  • What guidelines do you suggest for using Objective-C Properties?

    - by adarsha
    Objective-C 2.0 introduced properties. While I personally think properties are a nice addition to the language, I have seen a trend of making every instance variable a property, and Apple's sample code is no exception. I believe this is against the spirit of OOP, since it exposes far more implementation details of a class to the client than they need to know. What guidelines do you suggest for the proper usage of properties in Objective-C?

    Read the article

  • Finding co-maintainers for open source projects

    - by Mike Samuel
    I have a number of open-source projects that have gotten significant usage, and I would like to find co-maintainers so that I am not a bottleneck for maintenance and support requests, and so that I can get other perspectives on how the projects should evolve. Where should I look for co-maintainers, what should I look for in a co-maintainer, and how should I go about bringing them up to speed on the code and on maintainer responsibilities?

    Read the article

  • Enterprise vs Real time embedded systems

    - by JakeFisher
    At university I have two options for software architecture: enterprise systems, or real-time embedded systems. I would be very glad if someone could give me a brief explanation of what those are. I am interested in the following criteria: a brief overview; complexity and interest (does the knowledge cost much time to acquire?); area of usage; profit (salary); working tools and programs (perhaps a text editor or a UML editor); and anything else?

    Read the article

  • SQL SERVER – Introduction to Function SIGN

    - by pinaldave
    Yesterday I received an email from a friend asking how the SIGN function works. Well, SIGN is a very fundamental function. It returns the value 1, -1, or 0: if your value is negative it returns -1, and if it is positive it returns +1. Let us start with a simple small example.

    DECLARE @IntVal1 INT, @IntVal2 INT, @IntVal3 INT
    DECLARE @NumVal1 DECIMAL(4,2), @NumVal2 DECIMAL(4,2), @NumVal3 DECIMAL(4,2)
    SET @IntVal1 = 9; SET @IntVal2 = -9; SET @IntVal3 = 0;
    SET @NumVal1 = 9.0; SET @NumVal2 = -9.0; SET @NumVal3 = 0.0;
    SELECT SIGN(@IntVal1) IntVal1, SIGN(@IntVal2) IntVal2, SIGN(@IntVal3) IntVal3
    SELECT SIGN(@NumVal1) NumVal1, SIGN(@NumVal2) NumVal2, SIGN(@NumVal3) NumVal3

    The above queries will give us the following result set. You will notice that for positive values the function returns positive values, and for negative values it returns negative values. Also notice that if the data type is INT the return value is INT, and when the value passed to the function is numeric the result matches it. Not every data type is compatible with this function. Here is a quick lookup of the return types:

    bigint -> bigint
    int/smallint/tinyint -> int
    money/smallmoney -> money
    numeric/decimal -> numeric/decimal
    everything else -> float

    The best example of the usage of this function is that you do not have to use a CASE statement. Here is an example of CASE statement usage and the same logic replaced with the SIGN function.

    USE tempdb
    GO
    CREATE TABLE TestTable (Date1 SMALLDATETIME, Date2 SMALLDATETIME)
    INSERT INTO TestTable (Date1, Date2)
    SELECT '2012-06-22 16:15', '2012-06-20 16:15'
    UNION ALL
    SELECT '2012-06-24 16:15', '2012-06-22 16:15'
    UNION ALL
    SELECT '2012-06-22 16:15', '2012-06-22 16:15'
    GO
    -- Using CASE statement
    SELECT CASE
               WHEN DATEDIFF(d, Date1, Date2) > 0 THEN 1
               WHEN DATEDIFF(d, Date1, Date2) < 0 THEN -1
               ELSE 0
           END AS Col
    FROM TestTable
    GO
    -- Using SIGN function
    SELECT SIGN(DATEDIFF(d, Date1, Date2)) AS Col
    FROM TestTable
    GO
    DROP TABLE TestTable
    GO

    This was an interesting blog post for me to write. Let me know your opinion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate of each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code:

    // vertices is a Vertex[]
    // indices is a ushort[]
    // VertexDefs stores the vertex size (sizeof(float) * 5)

    // vertex data
    numVertices = vertices.Length;
    DataStream data = new DataStream(VertexDefs.size * numVertices, true, true);
    data.WriteRange<Vertex>(vertices);
    data.Position = 0;

    // vertex buffer parameters
    BufferDescription vbDesc = new BufferDescription()
    {
        BindFlags = BindFlags.VertexBuffer,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SizeInBytes = VertexDefs.size * numVertices,
        StructureByteStride = VertexDefs.size,
        Usage = ResourceUsage.Default
    };

    // create vertex buffer
    vertexBuffer = new Buffer(Graphics.device, data, vbDesc);
    vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0);
    data.Dispose();

    // index data
    numIndices = indices.Length;
    data = new DataStream(sizeof(ushort) * numIndices, true, true);
    data.WriteRange<ushort>(indices);
    data.Position = 0;

    // index buffer parameters
    BufferDescription ibDesc = new BufferDescription()
    {
        BindFlags = BindFlags.IndexBuffer,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SizeInBytes = sizeof(ushort) * numIndices,
        StructureByteStride = sizeof(ushort),
        Usage = ResourceUsage.Default
    };

    // create index buffer
    indexBuffer = new Buffer(Graphics.device, data, ibDesc);
    data.Dispose();

    Engine.Log(MessageType.Success, string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices));

    And my drawing code:

    // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data
    // e is of type ShaderEffect

    // get the technique
    ShaderTechnique t;
    if (!e.techniques.TryGetValue(techniqueName, out t))
        return;

    // effect variables
    e.SetMatrix("worldView", worldView);
    e.SetMatrix("projection", projection);
    e.SetResource("diffuseMap", texture);
    e.SetSampler("textureSampler", sampler);

    // set per-mesh/technique settings
    Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding);
    Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0);
    Graphics.context.PixelShader.SetSampler(sampler, 0);

    // render for each pass
    foreach (ShaderPass p in t.passes)
    {
        Graphics.context.InputAssembler.InputLayout = p.layout;
        p.pass.Apply(Graphics.context);
        Graphics.context.DrawIndexed(numIndices, 0, 0);
    }

    How can I do this?

    Read the article

  • ZTE USB Modem AC2736, connection not possible in Ubuntu 12.04.1 LTS

    - by Fredo
    It's a long post, but it nearly covers all my experiments and the changes I made to my Network Manager. I hope the information is complete; if there are still questions, more information can be provided. I have a ZTE AC2736 USB modem (a CDMA modem) which worked fine in Ubuntu 10.04/11.10. I recently switched to 12.04.1 (Precise Pangolin). After the switch, the first issue I faced was connecting to the internet using my USB modem (ISP: Reliance, brand: Netconnect). I tried to run the drivers provided by Reliance, but they are old and do not support kernels above 2.6.30. Since the code was not downloaded with the ISO image (of 12.04), I couldn't compile the files provided with such a driver. lsusb does detect it as a modem, with output similar to "19d2:fff1 ZTE CDMA Technologies Inc." (or similar, as I didn't note it down). If it is detected as USB storage it shows 19d2:fff5 (as per a few online forums; I may be wrong here). I used Network Manager and configured the modem to dial #777 (the default) with the ISP-provided username/password combination. It tries to connect to the internet (3-4 times automatically) but fails to get online. Once I was able to connect in the morning hours, and the message 'registered on CDMA home network' was flashed. I was able to run an update; the kernel was updated to something similar to 3.0.2-pae (I can find out if required). I surfed the net for about 2 hours before restarting. After the restart, the modem was again not able to get me online. I kept trying many times and tried changing the settings in Network Manager. One evening after dark I was able to connect to the network, with a similar 'registered on CDMA home network' message (I'm not precise on the wording here, sorry). I was able to surf the net for nearly 3 hours before I switched off my laptop. I have not been able to get online after that day; it's been 3 days now. I'll try the observed theory of late/early hours sometime soon, as mentioned below.

    Laptop configuration:
    Make: Lenovo
    Model: B480
    Processor: Intel B950
    RAM: 4G DDR3
    HDD: 500G
    Broadcom wireless/Bluetooth/Ethernet LAN
    OS: FreeDOS / Ubuntu 12.04 LTS (dual boot)
    Kernel: 3.0.2-pae

    Observation: I'm able to connect to the internet in those hours when the speed is generally high (low usage by other (wireless) network users), like early mornings or late nights. This is strange, as the connection should not depend on bandwidth usage.

    Any help in fixing this issue would be appreciated, before I decide to change the OS or the ISP.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units, and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that runs on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) and cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: "pay-as-you-go" or consumption, and "subscription" or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan, where you commit to a contract and thus pay lower rates. In this post I'll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost with the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you'll most certainly be asked how much something will cost, and you need a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and for power, the people to run the systems, and so on as an ongoing (and sometimes unfactored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that's where the architect and developer can guide the conversation to a choice based on the features of the application versus the true costs.

    The two big buckets of use types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you've written an application that provides the spot price of foo, and your customer pays for the use of that application. In that case, once you've estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It's a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use type, the application will be used by a more-or-less constant number of processes or users, and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created "in reverse", meaning that you pilot the application, monitor the use (and costs), and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs? Or the team of system administrators?

    This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business. There are three primary methods that I've been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth, and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation
    The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you'll use and how much of each will be used. Microsoft provides two tools to do this. One is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a "Return on Investment" (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself, with columns such as: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn't even been developed. Which brings us to the next model type.

    Measure and Project
    A more accurate model is to actually write the code for the application using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or, in some cases, a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.

    Sample and Estimate
    Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export its data for analysis and, using standard statistical and other predictive math such as regression, project what the future costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives who pay the bills: the past cost, future projected costs, and unit cost "per click" or "per transaction", as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend the entire population, not a smaller sample).

    References and Tools
    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx

    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools
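    To make the Simple Estimation matrix concrete, here is a small sketch that computes a monthly estimate from (component, cost per unit, estimated units) rows and sums them, exactly as the matrix-times-usage process above describes. The component names and rates are made up for illustration; the definitive prices are at the Windows Azure offers link at the top of this post.

        using System;
        using System.Collections.Generic;

        class ComponentEstimate
        {
            public string Component;       // Azure component and unit of measure
            public decimal CostPerUnit;    // dollars per unit (illustrative only)
            public decimal EstimatedUnits; // projected monthly usage

            public ComponentEstimate(string component, decimal costPerUnit, decimal estimatedUnits)
            {
                Component = component;
                CostPerUnit = costPerUnit;
                EstimatedUnits = estimatedUnits;
            }
        }

        class CostModel
        {
            static void Main()
            {
                var rows = new List<ComponentEstimate>
                {
                    new ComponentEstimate("Compute (small instance, hours)", 0.12m, 720m),
                    new ComponentEstimate("Storage (GB/month)",              0.15m,  50m),
                    new ComponentEstimate("Storage transactions (per 10K)",  0.01m, 200m),
                    new ComponentEstimate("Bandwidth out (GB)",              0.15m,  30m),
                    new ComponentEstimate("SQL Azure (DB/month)",            9.99m,   1m)
                };

                decimal total = 0m;
                foreach (var row in rows)
                {
                    // Each row of the matrix: estimated units times cost per unit.
                    decimal cost = row.CostPerUnit * row.EstimatedUnits;
                    total += cost;
                    Console.WriteLine("{0,-35} {1,10:C}", row.Component, cost);
                }
                Console.WriteLine("{0,-35} {1,10:C}", "Estimated monthly total", total);
            }
        }

    The rows map one-to-one onto the spreadsheet columns listed under Simple Estimation, so the output doubles as a starting point for the unit-cost figure discussed above.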

    Read the article

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many of the paper-based processes into web-based workflows for their 1.6 million customers. LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content, with Oracle SOA Suite for the integration of their complex back-end systems infrastructure. The new portal has received extremely positive feedback, not only from the customers and users of the portal but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution.

    Company Overview
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.

    Business Challenges
    The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, as well as to be the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many of the paper-based processes into web-based workflows for customers. This includes automation of:
    - Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
    - Financial Assistance Programs
    - Customer Rebate Programs
    - Turn Off/Turn On/Transfer of Services
    - Outage Reporting
    - eNotification (SMS, email)

    Solution Deployed
    LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content. Using Oracle SOA Suite, they integrated various back-end systems, including:
    - Oracle Siebel CRM
    - IBM mainframe-based CIS
    - FileNet for document management
    - EBP Electronic Bill Payment System
    - HP Imprint System for BillXML data
    - Other systems, including outage reporting systems, SMS service, etc.

    The new portal's features include:
    - Complete graphical redesign based on best practices in UI design for high usability
    - Customer Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
    - Financial Assistance Programs (CRM, WebCenter)
    - Customer Rebate Programs (CRM, WebCenter)
    - Turn On/Off/Transfer of services (commercial and residential)
    - Outage Reporting
    - eNotification (SMS, email)
    - Multilingual (English and Spanish), using WebCenter multi-language support
    - Section 508 (ADA) compliant
    - Search, using WebCenter SES (Secured Enterprise Search)
    - Distributed authorship in WebCenter Content
    - Mobile access (any mobile browser)

    Business Results
    The new portal has received extremely positive feedback, not only from customers and users of the portal but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution.

    Additional Information
    LADWP OpenWorld presentation
    Oracle WebCenter Portal
    Oracle WebCenter Content
    Oracle SOA Suite

    Read the article

  • An XEvent a Day (26 of 31) – Configuring Session Options

    - by Jonathan Kehayias
    There are 7 session-level options that can be configured in Extended Events that affect the way an Event Session operates.  These options can impact performance and should be considered when configuring an Event Session.  I have made use of a few of these periodically throughout this month's blog posts, and in today's blog post I'll cover each of the options separately and provide further information about their usage.  Mike Wachal from the Extended Events team at Microsoft talked...(read more)

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous; I really can't include them all.

    I. New AI Automated Installer RBAC profiles have been introduced to enable delegation of installation tasks.

    II. The interactive installer now supports installing the OS to iSCSI targets.

    III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests, speeding up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg

    IV. The new command svcbundle helps you create SMF manifests without having to struggle with XML editing. (By the way, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property-editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop)

    V. pfedit: Ever wondered how to delegate editing permissions for certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now the new pfedit command provides a solution to exactly this challenge - an auditable, secure, per-user-configurable editing possibility. See the pfedit man page for examples.

    VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...), has been included in Solaris 11.1 as an alternative to syslog.

    VII. Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features:
    - ZOSS (Zones on Shared Storage): placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg.
    - Parallel updates: with S11's boot environments, updating zones was no problem and meant no downtime anyway, but now you can update them in parallel - a far faster update action if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.
    - Per-zone fstype statistics: running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the other ones; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc
    - Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.
    - NUMA I/O support for zones: customers can now determine the NUMA I/O topology of the system from within zones.

    VIII. Security got a lot of attention too:
    - Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.
    - PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.
    - SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.
    - SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4, implemented in silicon! That is, government-approved cryptography at HW speed.
    - Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.

    IX. Networking, as one of the core strengths of Solaris 11, has been extended with:
    - Data Center Bridging (DCB): not only setups where network and storage share the same fabric (FCoE, anyone?) can have quality-of-service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.
    - DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc
    - VNIC live migration is now supported from one physical NIC to another on the fly.

    X. Data management:
    - FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1, FedFS uses LDAP as the one global nameservice to bind them all.
    - The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm, improving iSCSI throughput while reducing CPU utilization on a T4.
    - Storage locking improvements are now RAC aware, speeding up throughput with better locking communication between nodes by up to 20%!

    XI. Kernel performance optimizations:
    - The new virtual memory subsystem ("VM2") now scales to 100+ TB memory ranges.
    - The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs.
    - OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.

    XII. The Power Aware Dispatcher is now enabled by default, reducing power consumption of idle CPUs. Also, the LDoms' power management policies and the poweradm settings in the Solaris 11 OS will cooperate.

    XIII. x86 boot: upgrade to GRUB2 (the GRand Unified Bootloader). Because GRUB2 differs syntactically from GRUB1 in its configuration, one shall not edit the new grub configuration (grub.cfg) directly, but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2 TB.

    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.

    XV. Support for Solaris Cluster 4.1: the What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away.

    ...aand I seriously need to stop here. There's a lot I missed - Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc. - but if I mentioned all those I would effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blog post is a summary of that extract.

    For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features. Open up a Support Request, explain that this is an RFE, and describe the feature you/your company desires to have implemented in S11. The more SRs are collected for an RFE, the more chance it has to get implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation, using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)

    wbr,
    charlie

    Read the article

  • If im using nouveau there is no way to use vdpau with it?

    - by wanas
    I'm using Ubuntu 12.10 with the open-source driver that came preloaded with the distro. I tried to install the proprietary driver but failed, so now I'm using the open-source driver and it's okay. The only problem is that I want to use VDPAU, because it accelerates videos on YouTube and in SMPlayer, playing them with less CPU usage. My question is: is there a way to enable VDPAU with the open-source driver, or do I have to use the proprietary driver from NVIDIA?

    Read the article

  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    This article was written as a response to T-SQL Tuesday #005 - Reporting. The three most important components of any computer or server are the CPU, the memory, and the hard disk. This post talks about how to get more details about these three most important components using Management Data Collection. Management Data Collection generates the [...]

    Read the article

  • SQL SERVER – A Brief Note on SET TEXTSIZE

    - by pinaldave
    Here is a small conversation I received. Though an old topic, it is indeed thought-provoking for the moment. Question: Is there any difference between the LEFT function and SET TEXTSIZE? I really like this small but interesting question. The question does not specify whether the difference is in usage or in performance. Anyway, let us quickly take a look at how TEXTSIZE works. You can run the following script to see how LEFT and SET TEXTSIZE work.

    USE TempDB
    GO
    -- Create TestTable
    CREATE TABLE MyTable (ID INT, MyText VARCHAR(MAX))
    GO
    INSERT MyTable (ID, MyText)
    VALUES (1, REPLICATE('1234567890', 100))
    GO
    -- Select Data
    SELECT ID, MyText FROM MyTable
    GO
    -- Using LEFT
    SELECT ID, LEFT(MyText, 10) MyText FROM MyTable
    GO
    -- Set TEXTSIZE
    SET TEXTSIZE 10;
    SELECT ID, MyText FROM MyTable;
    SET TEXTSIZE 2147483647
    GO
    -- Clean up
    DROP TABLE MyTable
    GO

    Now let us see the result which we receive from both of the examples. If you are going to ask which one you should use - I really do not know. I can, however, tell you where I would use each of them. LEFT seems to be easier to use, but if you like the extra work related to SET TEXTSIZE, go for it. Here is how I would use SET TEXTSIZE: if I am selecting data in SSMS, for testing or any other non-production-related work, from a large table which has lots of columns with varchar data, I will consider using this statement to reduce the amount of data retrieved in the result set. In simple words, I will use it for testing purposes. On a production server, there should be a specific reason to use it. Here is my candid opinion - I do not think they are directly comparable, even though both of them give the exact same result. LEFT applies only to the column of the single SELECT statement where it is used, but SET TEXTSIZE applies to all the columns in the SELECT and follow-up SELECT statements until SET TEXTSIZE is modified again in the session. Not comparable! I hope this sample example gives you an idea of how to use SET TEXTSIZE in your daily work. I would like to know your opinion about how and when you use this feature. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Reasons for Conducting Keyword Research

    For successful SEO, usage of proper keywords/key phrases is needed. Many webmasters place extra importance on keyword research for every given article of web content. But most of the people new to this ... [Author: Alan Smith - Web Design and Development - June 13, 2010]

    Read the article
