Search Results

Search found 20275 results on 811 pages for 'general performance'.


  • Visual Studio Development on Virtual Box, Boot Camp, or VMWare Fusion

    - by Eli
    I currently have a Mac with a 2 GHz processor and 2 GB of RAM, running OS X Leopard and VirtualBox with a Windows 7 Pro 32-bit virtual machine. Performance of the virtual machine is fine for minor tasks but very clunky while trying to multi-task or develop in Visual Studio 2008. What would be my best option for being able to use Visual Studio, keeping cost and time in mind? 1) Upgrade the RAM to 4 GB ($100). Will this really improve my performance enough to use Visual Studio in a Windows 7 VM, or am I just wasting time and money? 2) Reinstall/restore the Windows 7 disk image as a Boot Camp partition. I assume this should improve my performance, yes? 3) Purchase VMware Fusion instead of VirtualBox. Does Fusion require fewer resources to run? I am open to any suggestions. Thanks in advance

    Read the article

  • Iozone: sensible settings for a server with lots of RAM

    - by Frank Brenner
    I have just acquired a server with: 2x quad-core Xeons, 48 GB ECC RAM, and 5x 160 GB SSDs on an LSI 9260-8i. Before deploying the target platform, I'd like to collect as much benchmark data as possible, testing I/O with hardware RAID in various configurations, ZFS zRAID, as well as I/O performance on vSphere and with KVM virtualization. In order to see real disk I/O performance without cache effects, I tried running Iozone with a maximum file size of more than twice the physical RAM, as recommended in the documentation, so: iozone -a -g100G However, as one might expect, this takes far too long to be practical. (I stopped the run after seven hours...) I'd like to reduce the range of record and file sizes to values that might reflect realistic performance for an application server, hopefully getting the run times to under an hour or so. Any ideas? Thanks.
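
    One approach that might get the run time down (the sizes and the file path below are illustrative assumptions, not recommendations) is to restrict the tests and the record/file size ranges, and to use direct I/O so the page cache is bypassed rather than outrun:

      # Limit file sizes to 512 MB - 8 GB and record sizes to 4 KB - 1 MB; run only
      # write/rewrite (0), read/reread (1) and random read/write (2). -I requests
      # O_DIRECT (needs filesystem support) and -e includes flush time in the results.
      iozone -a -e -I -i 0 -i 1 -i 2 -n 512m -g 8g -y 4k -q 1m \
             -f /data/iozone.tmp -R -b iozone-results.xls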

    Read the article

  • How to Track CPU and Memory Usage Per Process

    - by Mjsk
    I have seen this question asked on here before but was unable to follow the answer that was given. I would like to monitor a process's CPU, memory, and possibly GPU usage over a given period, and the data would be most useful presented in a graph. It would be nice if I could do this using Performance Monitor, but I am open to alternative solutions as well. I have tried using Performance Monitor and my problem is that I'm not sure which performance counters to use, since there are so many. I've been looking at the Process, Processor, and Memory categories, but I'm not sure which counters within those categories will be of interest to me. My OS is Windows 7.
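
    As one possible starting point, the per-process counters that usually matter here are % Processor Time, Working Set - Private, and Private Bytes under the Process object (as far as I know, Performance Monitor has no per-process GPU counters on Windows 7, so GPU usage needs a vendor tool). A sketch that logs them to a CSV you can graph later, with the process instance name "myapp" and the output file name as placeholders:

      rem Sample every 5 seconds for 720 samples (about an hour) and write a CSV for graphing.
      typeperf "\Process(myapp)\% Processor Time" ^
               "\Process(myapp)\Working Set - Private" ^
               "\Process(myapp)\Private Bytes" ^
               -si 5 -sc 720 -o proc_usage.csv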

    Read the article

  • KVM vs Hyper-V: which hypervisor is best for Windows guests?

    - by user198851
    I am currently testing OpenStack for Windows guests (XP and 7). I have deployed OpenStack "all in one" on a system with the following specs: Core i5 processor (4 physical cores, 8 threads with Hyper-Threading), 8 GB RAM, 500 GB HD. I have created 4 Windows XP guests with 512 MB RAM and 1 vCPU each, and on each Windows guest I have installed only Visual Studio 2008. In nova.conf the CPU over-commit ratio is 2 for better performance (as mentioned in the OpenStack operations guide), and I am using KVM as the hypervisor. I have observed poor performance when simultaneously using Visual Studio in the four Windows instances. How can I improve performance? Should I use KVM or Hyper-V, or is there any other suggestion?
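
    For reference, the over-commit setting in question is usually expressed in nova.conf roughly like this (a sketch only; option names and defaults can differ between OpenStack releases, and cpu_allocation_ratio is only honored when the scheduler's CoreFilter is enabled):

      [DEFAULT]
      # Scheduler over-commit ratios; the usual defaults are 16.0 for CPU and 1.5 for RAM.
      cpu_allocation_ratio = 2.0
      ram_allocation_ratio = 1.0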

    Read the article

  • Adding more than 15 digits in Excel

    - by user111921
    I want to enter more than 20 digits in an Excel cell. The current format of the cell is General, which converts the number to exponential notation. I tried the Number and Accounting formats, but when I enter more than 15 digits the remaining digits get converted to 0's. Please recommend steps for stopping Excel from converting a 20-digit entry to exponential notation while the cell is in the General format. Example: Excel converts 12345678901234567890 to 1.23457E+19 in General format. Without using an apostrophe (') before the value, is there any other way to keep the value the same?

    Read the article

  • Having trouble with a MySQL if statement

    - by user225269
    I just want to simplify what I was doing before, where I had multiple PHP files for every list of data. Here is my HTML form:

      <table border="0" align="center" cellpadding="0" cellspacing="1" bgcolor="#D3D3D3">
      <tr>
      <form name="formcheck" method="post" action="list.php" onsubmit="return formCheck(this);">
      <td>
      <table border="0" cellpadding="3" cellspacing="1" bgcolor="">
      <tr>
      <td colspan="16" height="25" style="background:#5C915C; color:white; border:white 1px solid; text-align: left"><strong><font size="3">List Students</td>
      </tr>
      <tr>
      <td width="30" height="35"><font size="3">*List:</td>
      <td width="30"><input name="specific" type="text" id="specific" maxlength="25" value=""> </td>
      <td><font size="3">*By:</td>
      <td> <select name="general" id="general"> <font size="3">
      <option>Year</option>
      <option>Address</option>
      </select></td></td>
      </tr>
      <tr>
      <td width="10"><input align="right" type="submit" name="Submit" value="Submit" > </td>
      </tr>
      </form>
      </table>

    And here's the form action:

      <?php
      $con = mysql_connect("localhost","root","nitoryolai123$%^");
      if (!$con) {
          die('Could not connect: ' . mysql_error());
      }
      mysql_select_db("school", $con);
      $gyear= $_POST['general'];
      if ("YEAR"==$_POST['general']) {
          $result = mysql_query("SELECT * FROM student WHERE YEAR='{$_POST["specific"]}'");
          echo "<table border='1'> <tr> <th>IDNO</th> <th>YEAR</th> <th>LASTNAME</th> <th>FIRSTNAME</th> </tr>";
          while($row = mysql_fetch_array($result)) {
              echo "<tr>";
              echo "<td>" . $row['IDNO'] . "</td>";
              echo "<td>" . $row['YEAR'] . "</td>";
              echo "<td>" . $row['LASTNAME'] . "</td>";
              echo "<td>" . $row['FIRSTNAME'] . "</td>";
              echo "</tr>";
          }
          echo "</table>";
      }
      mysql_close($con);
      ?>

    Please help: how do I compare the YEAR column in the MySQL database with the option box (general)? Is if ("YEAR"==$_POST['general']) correct? Please correct me if I'm wrong.
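
    One way to make that comparison line up (a sketch only; it keeps the legacy mysql_* functions from the post, assumes $con is the connection opened above, and assumes an ADDRESS column exists for the second option) is to compare case-insensitively and map the selected option to a whitelisted column name:

      <?php
      // Map the submitted option ("Year" or "Address") to a known column name;
      // never place $_POST values directly into the SQL string.
      $columns = array('year' => 'YEAR', 'address' => 'ADDRESS');
      $key     = strtolower(trim($_POST['general']));

      if (isset($columns[$key])) {
          $column   = $columns[$key];
          $specific = mysql_real_escape_string($_POST['specific'], $con);
          $result   = mysql_query("SELECT * FROM student WHERE $column = '$specific'", $con);
      }
      ?>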

    Read the article

  • Announcing RSS feeds of Microsoft All-In-One Code Framework code samples

    - by Jialiang
    Today, we are not only announcing Sample Browser v2 CTP, but we are also excited to announce the availability of RSS feeds of All-In-One Code Framework code samples. By using these feeds, you can easily track and download the new code samples.

    English RSS feeds:
      All code samples: http://support.microsoft.com/rss/en/rss.xml
      ASP.NET code samples: http://support.microsoft.com/rss/en/ASPNET.xml
      Silverlight code samples: http://support.microsoft.com/rss/en/Silverlight.xml
      Azure code samples: http://support.microsoft.com/rss/en/Azure.xml
      COM code samples: http://support.microsoft.com/rss/en/COM.xml
      Data Platform code samples: http://support.microsoft.com/rss/en/Data%20Platform.xml
      Library code samples: http://support.microsoft.com/rss/en/Library.xml
      Office dev code samples: http://support.microsoft.com/rss/en/Office.xml
      VSX code samples: http://support.microsoft.com/rss/en/VSX.xml
      Windows 7 code samples: http://support.microsoft.com/rss/en/Windows%207.xml
      Windows Forms code samples: http://support.microsoft.com/rss/en/Windows%20Forms.xml
      Windows General code samples: http://support.microsoft.com/rss/en/Windows%20General.xml
      Windows Service code samples: http://support.microsoft.com/rss/en/Windows%20Service.xml
      Windows Shell code samples: http://support.microsoft.com/rss/en/Windows%20Shell.xml
      Windows UI code samples: http://support.microsoft.com/rss/en/Windows%20UI.xml
      WPF code samples: http://support.microsoft.com/rss/en/WPF.xml

    Chinese (Simplified) RSS feeds:
      All code samples: http://support.microsoft.com/rss/zh-cn/codeplex/rss.xml
      ASP.NET code samples: http://support.microsoft.com/rss/zh-cn/codeplex/ASPNET.xml
      Silverlight code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Silverlight.xml
      Azure code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Azure.xml
      COM code samples: http://support.microsoft.com/rss/zh-cn/codeplex/COM.xml
      Data Platform code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Data%20Platform.xml
      Library code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Library.xml
      Office dev code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Office.xml
      VSX code samples: http://support.microsoft.com/rss/zh-cn/codeplex/VSX.xml
      Windows 7 code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Windows%207.xml
      Windows Forms code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Windows%20Forms.xml
      Windows General code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Windows%20General.xml
      Windows Service code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Windows%20Service.xml
      Windows Shell code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Windows%20Shell.xml
      Windows UI code samples: http://support.microsoft.com/rss/zh-cn/codeplex/Windows%20UI.xml
      WPF code samples: http://support.microsoft.com/rss/zh-cn/codeplex/WPF.xml

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost Engineered System Simplifies Big Data for the Enterprise Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with previous generation, the 18 compute and storage servers with 648 TB raw storage now offer: 33 percent more processing power with 288 CPU cores; 33 percent more memory per node with 1.1 TB of main memory; and up to a 30 percent reduction in power and cooling Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes: Support for CDH4.1 including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster; Oracle NoSQL Database Community Edition 2.0, the latest version that brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support; The Oracle Enterprise Manager plug-in for Big Data Appliance that complements Cloudera Manager to enable users to more easily manage a Hadoop cluster; Updated distributions of Oracle Linux and Oracle Java Development Kit; An updated distribution of open source R, optimized to work with high performance multi-threaded math libraries Read More   Data sheet: Oracle Big Data Appliance X3-2 Oracle Big Data Appliance: Datacenter Network Integration Big Data and Natural Language: Extracting Insight From Text Thomson Reuters Discusses Oracle's Big Data Platform Connectors Integrate Hadoop with Oracle Big Data Ecosystem Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes: Oracle SQL Connector for Hadoop Distributed File System, for high performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables and now supported within the Oracle Data Integrator Application Adapter for Hadoop; Transparent access to the Hive Query language from R and introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment. Read More Data sheet: Oracle Big Data Connectors High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • .NET Weak Event Handlers – Part II

    - by João Angelo
    On the first part of this article I showed two possible ways to create weak event handlers, one using reflection and the other using a delegate. For this performance analysis we will further differentiate between creating a delegate by providing the type of the listener at compile time (Explicit Delegate) and creating the delegate with the type of the listener only obtained at runtime (Implicit Delegate). As expected, the performance of the reflection and delegate approaches differs significantly. With the reflection-based approach, creating a weak event handler is just storing a MethodInfo reference, while with the delegate-based approach there is the need to create the delegate which will be invoked later. So, at creation time reflection clearly wins; but what about when the handler is invoked? No surprises there, performing a call through reflection every time a handler is invoked is costly. In conclusion, if you want good performance when creating handlers that only sporadically get triggered, use reflection; otherwise use the delegate-based approach. The explicit delegate approach always wins against the implicit delegate, but I find the syntax for the latter much more intuitive.

      // Implicit delegate - The listener type is inferred at runtime from the handler parameter
      public static EventHandler WrapInDelegateCall(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs>(EventHandler<TArgs> handler) where TArgs : EventArgs;

      // Explicit delegate - TListener is the type that defines the handler
      public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs, TListener>(EventHandler<TArgs> handler) where TArgs : EventArgs;
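
    As a usage illustration only (the Publisher and Listener types, the SomethingHappened event, and the WeakEvents holder class are names assumed here, not the ones from part I), the two flavours would be wired up like this:

      // Hypothetical wiring; WeakEvents stands in for the static class holding the helpers.
      Publisher publisher = new Publisher();
      Listener listener = new Listener();

      // Implicit delegate: the listener type is inferred at runtime from the handler's target.
      publisher.SomethingHappened += WeakEvents.WrapInDelegateCall(listener.OnSomethingHappened);

      // Explicit delegate: the listener type is supplied at compile time.
      publisher.SomethingHappened += WeakEvents.WrapInDelegateCall<Listener>(listener.OnSomethingHappened);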

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server?” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow: we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows that Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside this system, and they all relate to what the root cause of the performance problem is. 
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
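    For reference, a rough version of that check on SQL Server 2005/2008 looks like the query below (the single_pages_kb and multi_pages_kb columns were consolidated into pages_kb in later versions, so treat this as a sketch for the versions discussed here):

      -- Approximate size of the SQL and object plan cache stores, in MB
      SELECT  [type],
              SUM(single_pages_kb + multi_pages_kb) / 1024 AS cache_size_mb
      FROM    sys.dm_os_memory_clerks
      WHERE   [type] IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
      GROUP BY [type];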
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
    As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the data you can gather from SQL about performance fits together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog post on it. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what the problem really is. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • The C++ Standard Template Library as a BDB Database (part 1)

    - by Gregory Burd
    If you've used C++ you have undoubtedly used the Standard Template Library (STL). Designed for in-memory management of data and collections of data, it is a core aspect of all C++ programs. Berkeley DB is a database library with a variety of APIs designed to ease development; one of those APIs extends and makes use of the STL for persistent, transactional data storage. dbstl is an STL standard-compatible API for Berkeley DB. You can make use of Berkeley DB via this API as if you were using C++ STL classes, and still make full use of Berkeley DB features. Being an STL library backed by a database, there are some important and useful features that dbstl can provide which the C++ STL library can't. The following are a few typical use cases for the dbstl extensions to the C++ STL for data storage. When data exceeds available physical memory. Berkeley DB dbstl can vastly improve performance when managing a dataset which is larger than available memory. Performance suffers when the data can't reside in memory because the OS is forced to use virtual memory and swap pages of memory to disk. Switching to BDB's dbstl improves performance while allowing you to keep using STL containers. When you need concurrent access to C++ STL containers. Few existing C++ STL implementations support concurrent access (create/read/update/delete) within a container; at best you'll find support for accessing different containers of the same type concurrently. With the Berkeley DB dbstl implementation you can concurrently access your data from multiple threads or processes with confidence in the outcome. When your objects are your database. You want to have object persistence in your application, store objects in a database, and use the objects across different runs of your application without having to translate them to/from SQL. The dbstl is capable of storing complicated objects, even those not located in a contiguous chunk of memory space, directly to disk without any unnecessary overhead. These are a few reasons why you should consider using Berkeley DB's C++ STL support for your embedded database application. In the next few blog posts I'll show you a few examples of this approach; it's easy to use and easy to learn.

    Read the article

  • CodePlex Daily Summary for Thursday, December 09, 2010

    CodePlex Daily Summary for Thursday, December 09, 2010Popular ReleasesAutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (It is still recommended to use with caution and to read the documentation) It is now possible to associate *.lolm files with AutoLoL to quickly open them The selected spells are now displayed in the masteries tab for qu...SubtitleTools: SubtitleTools 1.2: - Added auto insertion of RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for the RTL languages. - Fixed delete rows issue.PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus additional features listed below: Improved detection logic for existing PHP installations. Now PHP Manager detects the location to php.ini file in accordance to the PHP specifications Configuring date.timezone. PHP Manager can automatically set the date.timezone directive which is required to be set starting from PHP 5.3 Ability to ...Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug that the running state hadn't been updated after socket server stopped fixed a synchronization issue when clearing timeout session fixed a bug in ArraySegmentList fixed a bug on getting configuration valueCslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2 and was released 2010 December 7 and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi Templates-2010-10-07.zip For getting started instructions, refer to How to section. Overview of the changes Since CTP1 there were 53 work items closed (28 features, 24 issues and 1 task). During this 60 days a lot of work has been done on several areas. First the stereotypes: EditableRoot is OK EditableChild is OK EditableRootCollection is OK Editable...Windows Workflow Foundation on Codeplex: WF AppFabric Caching Activity Pack 0.1: This release includes a set of AppFabric Caching Activities that allow you to use Windows Server AppFabric Caching with WF4. Video endpoint.tv - New WF4 Caching Activities for Windows Server AppFabric ActivitiesDataCacheAdd DataCacheGet DataCachePut DataCacheGet DataCacheRemove WaitForCacheBulkNotification WaitForCacheNotification WaitForFailureNotification WaitForItemNotification WaitForRegionNotification Unit TestsUnit tests are included in the source. Be sure to star...My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.EnhSim: EnhSim 2.2.0 ALPHA: 2.2.0 ALPHAThis release adds in the changes for 4.03a. at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. 
This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: popup WhiteSpaceFilterAttribute tested on mozilla, safari, chrome, opera, ie 9b/8/7/6nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).TweetSharp: TweetSharp v2.0.0.0 - Preview 4: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numerous fixes reported by preview users Preview 3 ChangesNumerous fixes and improvements to core engine Twitter API coverage: a...Aura: Aura Preview 1: Rewritten from scratch. This release supports getting color only from icon of foreground window.MBG Extensions Library: MBG.Extensions_v1.3: MBG.Extensions Collections.CollectionExtensions - AddIfNew - RemoveRange (Moved From ListExtensions to here, where it should have been) Collections.EnumerableExtensions - ToCommaSeparatedList has been replaced by: Join() and ToValueSeparatedList Join is for a single line of values. ToValueSeparatedList is generally for collection and will separate each entity in the collection by a new line character - ToQueue - ToStack Core.ByteExtensions - TripleDESDecrypt Core.DateTimeExtension...myCollections: Version 1.2: New in version 1.2: Big performance improvement. New Design (Added Outlook style View, New detail view, New Groub By...) Added Sort by Media Added Manage Movie Studio Zoom preference is now saved. Media name are now editable. 
Added Portuguese version You can now Hide details panel Add support for FLAC tags You can now imports books from BibTex Xml file BugFixingmytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) mytrip.mvc 1.0.49.0 beta src System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net 6.3.4, MVC3 RC WARNING For run and debug mytrip.mvc 1.0.49.0 beta src download and ...Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported(where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Moved menu management and keyboard navigation code to the new PopupMenuManager class - Simplified the layout by removing the RootGrid element(all content is now placed in OverlayCanvas and is accessed by the new ...MiniTwitter: 1.62: MiniTwitter 1.62 ???? ?? ??????????????????????????????????????? 140 ?????????????????????????? ???????????????????????????????? ?? ??????????????????????????????????Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targetted for stable daily use. With improved performance and enhanced compatibility with several latest PHP open source applications; it makes this release perfect replacement of your old PHP runtime. Changes made within this release include following and much more: Performance improvements based on real-world applications experience. We determined biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...Chronos WPF: Chronos v2.0 Beta 3: Release notes: Updated introduction document. Updated Visual Studio 2010 Extension (vsix) package. Added horizontal scrolling to the main window TaskBar. Added new styles for ListView, ListViewItem, GridViewColumnHeader, ... Added a new WindowViewModel class (allowing to fetch data). Added a new Navigate method (with several overloads) to the NavigationViewModel class (protected). Reimplemented Task usage for the WorkspaceViewModel.OnDelete method. Removed the reflection effect...New Projects:WinK: WinK Project1000 bornes: This project is the adaptation of the famous French card game 1000 bornes (http://en.wikipedia.org/wiki/Mille_Bornes) There will be 3 types of clients: - Windows Application (WPF) - Internet Application (ASP.NET, Ajax) - Silverlight Application It's developed in C#.AutomaTones: BDSA Project 2010. Team Anders is developing an application that uses automatons to generate music.EIRENE: UnknownFinal: TDD driven analize of avalable tdd frameworks ect.HomeGrown Database Project tools: A set of tools that can be used to deploy Visual Studio SQL Databse and Server Projects. 
Developed using Visual Basic .Net 4.0mcssolution: no summaryMobile-enabled ASP.NET Web Forms / MVC application samples: Code samples for the whitepaper "Add mobile pages to your ASP.NET Web Forms / MVC application" linked from http://asp.net/mobileMSDI Projects: www.msdi.cnObject TreeView Visualizer: This is a Helper Library for easy Access to Visual a Object to an treeview. Nice feature to display data, if an error happen. One Place To Rule Them All: Desktop system to manage basics system functions in 3d environmant. Optra also provide community support and easy transfer data and setups between varius devices using xmpp protocol and OpenFire jabber server.Performance Data Suite: The Performance Data Suite will help you to monitor, analyze and optimize your server infrastructure. There will be predefined sets of data collections(e.g. MySQL, Apache, IIS) but it will also help you to create collections on your own.Secure Group Communication in AdHoc Networks: Secure Group Communication in AdHoc Networksimweb: simweb - is a research project which own by GCR and all its copyright belong to GCR. You can download the code for reference only but not able to be commercial without a fees.starLiGHT.Engine: starLiGHT.Engine is a set of libraries for indie game developers using XNA. It is in development for some years now as a closed source project. Now I will release some (most) parts as Open Source (dual licensing).UMC? ???? .NET ??? ?? ???? ???: ???(Junil, Um)? ???? .NET ???? ?? ?? ???? ??? ???.University of Ottawa tour for WP7: This is a Windows Phone 7 tour guide app for the University of Ottawa. vutpp for VS2010: C++ UnitTest Gui Addin????: ????

    Read the article

  • SQL SERVER – 2008 – Missing Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify missing indexes on any database. Please note that you should not create all the missing indexes this script suggests; it is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to Avg_Estimated_Impact when you are going to create an index. The index creation script is also provided in the last column. Download Missing Index Script with Unused Index Script

      -- Missing Index Script
      -- Original Author: Pinal Dave (C) 2011
      SELECT TOP 25
      dm_mid.database_id AS DatabaseID,
      dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact,
      dm_migs.last_user_seek AS Last_User_Seek,
      OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) AS [TableName],
      'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) + '_'
      + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','')
      + CASE
          WHEN dm_mid.equality_columns IS NOT NULL
          AND dm_mid.inequality_columns IS NOT NULL THEN '_'
          ELSE ''
        END
      + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','')
      + ']'
      + ' ON ' + dm_mid.statement
      + ' (' + ISNULL (dm_mid.equality_columns,'')
      + CASE WHEN dm_mid.equality_columns IS NOT NULL
          AND dm_mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
      + ISNULL (dm_mid.inequality_columns, '')
      + ')'
      + ISNULL (' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement
      FROM sys.dm_db_missing_index_groups dm_mig
      INNER JOIN sys.dm_db_missing_index_group_stats dm_migs
      ON dm_migs.group_handle = dm_mig.index_group_handle
      INNER JOIN sys.dm_db_missing_index_details dm_mid
      ON dm_mig.index_handle = dm_mid.index_handle
      WHERE dm_mid.database_ID = DB_ID()
      ORDER BY Avg_Estimated_Impact DESC
      GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in Dax #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you started using DAX as a query language, you might have encountered some performance issues when using SUMMARIZE. The problem is related to the calculation you put in the SUMMARIZE by adding what are called extension columns, which compute their value within a filter context defined by the rows considered in the group that SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from the SUMMARIZE and adding them by using an ADDCOLUMNS function. In practice, instead of writing:

      SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> )

    you can write:

      ADDCOLUMNS(
          SUMMARIZE( <table>, <group_by_column> ),
          <column_name>, CALCULATE( <expression> )
      )

    The performance difference might be huge (orders of magnitude), but this optimization might produce different semantics, and in those cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as existing measures? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!
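
    As a concrete illustration of the rewrite above (the Sales table, Product[Color] column, and Sales[Amount] column are made-up names used only for this sketch, and a relationship between Sales and Product is assumed), the two patterns look like this as DAX queries:

      -- Extension column computed inside SUMMARIZE (often slower)
      EVALUATE
      SUMMARIZE ( Sales, Product[Color], "Total Amount", SUM ( Sales[Amount] ) )

      -- Group with SUMMARIZE, add the column with ADDCOLUMNS (usually faster)
      EVALUATE
      ADDCOLUMNS (
          SUMMARIZE ( Sales, Product[Color] ),
          "Total Amount", CALCULATE ( SUM ( Sales[Amount] ) )
      )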

    Read the article

  • PeopleSoft 9.2 Financial Management Training – Now Available

    - by Di Seghposs
    A guest post from Oracle University.... Whether you’re part of a project team implementing PeopleSoft 9.2 Financials for your company or a partner implementing for your customer, you should attend some of the new training courses.  Everyone knows project team training is critical at the start of a new implementation, including configuration training on the core application modules being implemented. Oracle offers these courses to help customers and partners understand the functionality most relevant to complete end-to-end business processes, to identify any additional development work that may be necessary to customize applications, and to ensure integration between different modules within the overall business process. Training will provide you with the skills and knowledge needed to ensure a smooth, rapid and successful implementation of your PeopleSoft applications in support of your organization’s financial management processes - including step-by-step instruction for implementing, using, and maintaining your applications. It will also help you understand the application and configuration options to make the right implementation decisions. Courses vary based on your role in the implementation and on-going use of the application, and should be a part of every implementation plan, whether it is for an upgrade or a new rollout. Here’s some of the roles that should consider training: · Configuration or functional implementers · Implementation Consultants (Oracle partners) · Super Users · Business Analysts · Financial Reporting Specialists · Administrators PeopleSoft Financial Management Courses: New Features Course: · PeopleSoft Financial Solutions Rel 9.2 New Features Functional Training: · PeopleSoft General Ledger Rel 9.2 · PeopleSoft Payables Rel 9.2 · PeopleSoft Receivables Rel 9.2 · PeopleSoft Asset Management Rel 9.2 · Expenses Rel 9.2 · PeopleSoft Project Costing Rel 9.2 · PeopleSoft Billing Rel 9.2 · PeopleSoft PS / nVision for General Ledger Rel 9.2 Accelerated Courses (include content from two courses for more experienced team members): · PeopleSoft General Ledger Foundation Accelerated Rel 9.2 · PeopleSoft Billing / Receivables Accelerated Rel 9.2 · PeopleSoft Purchasing / Payable Accelerated Rel 9.2 View PeopleSoft Training Overview Video

    Read the article

  • Great Blogs About Oracle Solaris 11

    - by Markus Weber
    Now that Oracle Solaris 11 has been released, why not blog about blogs. There is of course a tremendous amount of resources and information available, but valuable insights directly from the people actually building the product are priceless. Here's a list of such great blogs. NOTE: If you think we missed some good ones, please let us know in the comments section!

      Topic | Title | Author
      Top 11 Things | My 11 favourite Solaris 11 features | Darren Moffat
      Top 11 Things | These are 11 of my favorite things! | Mike Gerdts
      Top 11 Things | 11 reason to love Solaris 11 | Jim Laurent
      SysAdmin Resources | Solaris 11 Resources for System Administrators | Rick Ramsey
      Overview | Oracle Solaris 11: The First Cloud OS | Larry Wake
      Overview | What's a "Cloud Operating System"? | Harry Foxwell
      Overview | What's New in Oracle Solaris 11 | Jeff Victor
      Try it! | Virtually the fastest way to try Solaris 11 (and Solaris 10 zones) | Dave Miner
      Upgrade | Upgrading Solaris 11 Express b151a with support to Solaris 11 | Alan Hargreaves
      IPS | The IPS System Repository | Tim Foster
      IPS | Building a Solaris 11 repository without network connection | Jim Laurent
      IPS | IPS Self-assembly – Part 1: overlays | Tim Foster
      IPS | Self assembly – Part 2: multiple packages delivering configuration | Tim Foster
      Security | Immutable Zones on Encrypted ZFS | Darren Moffat
      Security | User home directory encryption with ZFS | Darren Moffat
      Security | Password (PAM) caching for Solaris su - "a la sudo" | Darren Moffat
      Security | Completely disabling root logins on Solaris 11 | Darren Moffat
      Security | OpenSSL Version in Solaris | Darren Moffat
      Security | Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 | Valerie Fenwick
      Performance | Critical Threads Optimization | Rafael Vanoni
      Performance | SPARC T4-2 Delivers World Record SPECjvm2008 Result with Oracle Solaris 11 | BestPerf Blog
      Performance | Recent Benchmarks Using Oracle Solaris 11 | BestPerf Blog
      Predictive Self Healing | Introducing SMF Layers | Sean Wilcox
      Predictive Self Healing | Oracle Solaris 11 - New Fault Management Features | Gavin Maltby
      Desktop | What's new on the Solaris 11 Desktop? | Calum Benson
      Desktop | S11 X11: ye olde window system in today's new operating system | Alan Coopersmith
      Desktop | Accessible Oracle Solaris 11 - released! | Peter Korn

    Read the article

  • links for 2010-04-13

    - by Bob Rhubart
    Frederic Michiar: Manage a flexible and elastic Data Center with Oracle VM Manager Frederic Michiar shares a list of Oracle VM resources. (tags: otn oracle virtualization) Mona Rakibe: BAM Data Control in multiple ADF Faces Components "When two or more ADF Faces components must display the same data, and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated. " Mona Rakibe shows you how. (tags: oracle otn soa bam adf) Martin Widlake: Performance Tipping Points Martin Widlake offers "a nice example of a performance tipping point. This is where Everything is OK until you reach a point where it all quickly cascades to Not OK." (tags: oracle otn database architecture performance) Steve Chan: EBS Techstack Sessions at OAUG/Collaborate 2010 Steve Chan shares a list of Collaborate 2010 sessions featuring Oracle E-Business Suite Applications Technology Group staffers. (tags: oracle otn collaborate2010 ebs) @ORACLENERD: Developing in APEX Oracle ACE Chet Justice counts the ways... (tags: otn oracle oracleace apex) @bex: Almost Time For IOUG Collaborate 2010 Oracle ACE Director Bex Huff shares details on his Collaborate 2010 presentation, "The Top 10 Things Oracle UCM Customers Need To Know About WebLogic:" (tags: oracle otn oracleace collaborate2010 weblogic ucm enterprise2.0)

    Read the article

  • MySQL Connect Content Catalog Live

    - by Bertrand Matthelié
    The MySQL Connect Content Catalog is now live and you can check out the great program the content committee put together for you. We received a lot of very good submissions during the call for papers and we’d like to thank you all again for those, it was a very difficult job to choose. Overall MySQL Connect will in two days include: Keynotes, with speakers such as Oracle Chief Corporate Architect Edward Screven and Vice President of MySQL Engineering Tomas Ulin 66 conference sessions, enabling you to hear from: Oracle engineers on MySQL 5.6 new features, InnoDB, performance and scalability, security, NoSQL, MySQL Cluster…and more MySQL users and customers including Facebook, Twitter, PayPal, Yahoo, Ticketmaster, and CERN Internationally recognized MySQL community members and partners on topics such as performance, security or high availability 6 Birds-of-a-feather sessions, in which you’ll be able to engage into passionate discussions about replication, backup and other subjects, and help influence the MySQL roadmap 8 Hands-On Labs designed to give you hands-on experience about MySQL replication, MySQL Cluster, the MySQL Performance Schema…and more Demo pods about MySQL Workbench, MySQL Cluster, MySQL Enterprise Edition and other technologies and services We’ll also have networking receptions on both Saturday and Sunday evening, enabling you to discuss with the Oracle engineers developing and supporting the MySQL products, as well as with other users and customers. Additionally, you’ll have the opportunity to meet and learn from our partners in the exhibition hall. Some of the MySQL Connect speakers such as Henrik Ingo and Andrew Morgan have already blogged about their presence at MySQL Connect, and you can find more information about their sessions or their thoughts about the conference in their blogs. We also published an interview with Tomas Ulin a few weeks ago. In summary, don’t miss MySQL Connect! And you only have about 3 weeks left to register with the early bird discount and save US$500. Don’t wait, Register Now! Interested in sponsorship and exhibit opportunities? You will find more information here.

    Read the article

  • Java, the Cloud, and Oracle at QCon San Francisco 2011

    - by Bob Rhubart
    If you're part of the lucky bunch attending this week's sold-out QCon San Francisco conference at the Westin San Francisco Market Street, I'd like to bring several sessions to your attention. On Wednesday Nov 16, Alex Buckley, specification lead for the Java Language and the Java Virtual Machine at Oracle, will present Java 7 and 8: Where We've Been, Where We're Going, part of the Why is Java still sexy? track. The session begins at 10:35 a.m. in the Olympic room. On Thursday Nov 17, Tyler Jewell, VP Product Management for Oracle's Platform as a Service, will participate in the Performance and Scalability Panel moderated by InfoQ founder and QCon SF Program Committee Member Floyd Marinescu. That panel, part of the Performance and Scalability Solutions track, begins at 10:35 a.m. in the Olympic room. Following that panel discussion, Tyler will fly solo with a presentation on Java EE 7: Developing for the Cloud, also part of the Performance and Scalability Solutions track. That session kicks off at 12:05 p.m., also in the Olympic room. On Friday Nov 18 Tyler will jump tracks, so to speak, when he presents The Architecture of Oracle's Public Cloud, part of the Architecture Case Studies: Cloud track. That session begins at 4:50 p.m. in the Stanford room. Of course, QCon also offers ample meet-and-greet opportunities. One such opportunity happens in the hospitality suite hosted by the Java Community Process Executive Committee. That shindig gets in gear at 5:50 pm on Thursday. Throughout the QCon San Francisco conference, members of the OTN team (including yours truly) and members of the Oracle Fusion Middleware team will be on hand at the OTN booth in the conference lobby. Stop by to say hello, score some swag, and catch a demo or two.

    Read the article

  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
    Here is a list of criteria to test your business-systems (Oracle E-Business Suite, EBS) or otherwise to support your lines of digital business - if you score low, you need Oracle Billing and Revenue Management (BRM). Functions Scalability High Availability (99.999%) Performance Extensibility (e.g. APIs, Tools) Upgradability Maintenance Security Standards Compliance Regulatory Compliance (e.g. SOX) User Experience Implementation Complexity Features Customer Management Real-Time Service Authorization Pricing/Promotions Flexibility Subscriptions Usage Rating and Pricing Real-Time Balance Mgmt. Non-Currency Resources Billing & Invoicing A/R & G/L Payments & Collections Revenue Assurance Integration with Key Enterprise Applications Reporting Business Intelligence Order & Service Mgmt (OSM) Siebel CRM E-Business Suite On-/Off-line Mediation Payment Processing Taxation Royalties & Settlements Operations Management Disaster Recovery Overall Evaluation Implementation Configuration Extensibility Maintenance Upgradability Functional Richness Feature Richness Usability OOB Integrations Operations Management Leveraging Oracle Technology Overall Fit for Purpose You need Oracle BRM: Built for high-volume transaction processing Monetizes any service or event based on any metric Supports high-volume usage rating, pricing and promotions Provides real-time charging, service authorization and balance management Supports any account structure (e.g. corporate hierarchies etc.) Scales from low volumes to extremely high volumes of transactions (e.g. billions of trxn per hour) Exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA) Immediate Business Benefits of BRM: Improved business agility and performance Supports the flexibility, innovation, and customer-centricity required for current and future business models Faster time to market for new products and services Supports 360 view of the customer in real-time – products can be launched to targeted customers at a record-breaking pace Streamlined deployment and operation Productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs Extensible and scalable solution Minimizes risk – initial phase deployed rapidly; solution extended and scaled seamlessly per business requirements Key Considerations Productized integration with key Oracle applications Lower integration risks and cost Efficient order-to-cash process Engineered solution – certification on Exa platform Exadata tested at PayPal in the re-platforming project Optimal performance of Oracle assets on Oracle hardware Productized solution in Rapid Offer Design and Order Delivery Fast offer design and implementation Significantly shorter order cycle time Productized integration with Oracle Enterprise Manager Visibility to system operability for optimal up time

    Read the article

  • SOA & BPM Best of Oracle OpenWorld 2011

    - by JuergenKress
    Oracle OpenWorld 2011 is over – what important updates did you miss? Keynotes: The best of the Oracle OpenWorld keynotes and general sessions is available on-demand. We recommend watching: Oracle Cloud Computing, Larry Ellison, CEO, Oracle (watch the full-length keynote) and the Middleware General Session, Hasan Rizvi, SVP, Oracle (watch the full-length general session). Presentations: All presentations are available online in the OpenWorld Content Catalog. Product highlight: the launch of the BPM Suite 11.1.1.5 Feature Pack and the Oracle Process Accelerators. For details please visit the Oracle BPM team blog and the Oracle SOA team blog.

    Read the article

  • Is there an alternative to Google Code Search?

    - by blunders
    Per the Official Google Blog: Code Search, which was designed to help people search for open source code all over the web, will be shut down along with the Code Search API on January 15, 2012. Google Code Search is now gone, and since that makes it much harder to understand the features it offered, here's my attempt to reconstruct them from a cached copy of the Search Options page. The "In Search Box" entry shows the syntax for typing the filter directly into the main search box instead of using the advanced search interface:

    - Package (In Search Box: "package:linux-2.6")
    - Language (In Search Box: "lang:c++") (OPTIONS: any language, actionscript, ada, applescript, asp, assembly, autoconf, automake, awk, basic, bat, c, c#, c++, caja, cobol, coldfusion, configure, css, d, eiffel, erlang, fortran, go, haskell, inform, java, java, javascript, jsp, lex, limbo, lisp, lolcode, lua, m4, makefile, maple, mathematica, matlab, messagecatalog, modula2, modula3, objectivec, ocaml, pascal, perl, php, pod, prolog, proto, python, python, r, rebol, ruby, sas, scheme, scilab, sgml, shell, smalltalk, sml, sql, svg, tcl, tex, texinfo, troff, verilog, vhdl, vim, xslt, xul, yacc)
    - File (In Search Box: "file:^.*.java$")
    - Class (In Search Box: "class:HashMap")
    - Function (In Search Box: "function:toString")
    - License (In Search Box: "license:mozilla") (OPTIONS: null/any-license, aladdin/Aladdin-Public-License, artistic/Artistic-License, apache/Apache-License, apple/Apple-Public-Source-License, bsd/BSD-License, cpl/Common-Public-License, epl/Eclipse-Public-License, agpl/GNU-Affero-General-Public-License, gpl/GNU-General-Public-License, lgpl/GNU-Lesser-General-Public-License, disclaimer/Historical-Permission-Notice-and-Disclaimer, ibm/IBM-Public-License, lucent/Lucent-Public-License, mit/MIT-License, mozilla/Mozilla-Public-License, nasa/NASA-Open-Source-Agreement, python/Python-Software-Foundation-License, qpl/Q-Public-License, sleepycat/Sleepycat-License, zope/Zope-Public-License)
    - Case Sensitive (In Search Box: "case:no") (OPTIONS: yes, no)

    Also useful for understanding the search tool is the still-live FAQ page for Google Code Search. Is there any code search engine that would fully replace Google Code Search's features?
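    To make the feature set above concrete, here is a minimal, hedged sketch of what those filters actually do, applied to a local source tree. It is only an illustration of the semantics (language by extension, file-name regex, content regex, case-sensitivity toggle); the function and extension map are made up for the example and it is not a stand-in for any particular search engine.

```python
import re
from pathlib import Path

# Hypothetical extension map; extend as needed for your own tree.
LANG_EXTENSIONS = {"c++": {".cc", ".cpp", ".h", ".hpp"}, "python": {".py"}, "java": {".java"}}

def search(root, pattern, lang=None, file_regex=None, case_sensitive=True):
    """Grep-like walk that mimics the lang:, file:, and case: filters."""
    flags = 0 if case_sensitive else re.IGNORECASE
    content_re = re.compile(pattern, flags)
    name_re = re.compile(file_regex) if file_regex else None
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if lang and path.suffix not in LANG_EXTENSIONS.get(lang, set()):
            continue
        if name_re and not name_re.search(path.name):
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if content_re.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

# Roughly equivalent to: lang:java file:^.*\.java$ toString case:no
search(".", r"toString", lang="java", file_regex=r"^.*\.java$", case_sensitive=False)
```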

    Read the article

  • Why is HTML/Javascript minification beneficial

    - by Channel72
    Why is HTML/Javascript minification beneficial when the HTTP protocol already supports gzip data compression? I realize that Javascript/HTML minification has the potential to significantly reduce the size of Javascript/HTML files by removing unnecessary whitespace, and perhaps renaming variables to a few letters each, but doesn't gzip's DEFLATE algorithm (LZ77 plus Huffman coding) do especially well when there are many repeated characters (e.g. lots of whitespace)? I realize that some Javascript minification tools do more than just reduce size. Google's Closure Compiler, for example, also tries to improve code performance by inlining functions and doing other analyses. But the primary purpose of Javascript minification is usually to reduce file size. I also realize there are other reasons you might want to minify aside from performance, such as code obfuscation. But again, that reason is not usually emphasized as much as performance gain and file size reduction. For example, Closure Compiler is not advertised as an obfuscation tool, but as a code size reducer and download-speed enhancer. So, how much performance do you really gain from Javascript/HTML minification when you're already significantly reducing file size with gzip compression?
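    One way to get a feel for the answer is simply to measure both. The sketch below is a deliberately crude illustration; the comment-and-whitespace stripping stands in for a real minifier such as UglifyJS or Closure Compiler and is not safe for real JavaScript:

```python
import gzip
import re

js = """
function greetUser(userName) {
    // Build a friendly greeting for the given user
    var greetingMessage = "Hello, " + userName + "!";
    return greetingMessage;
}
"""

# Crude "minification" for illustration only: drop line comments, collapse whitespace.
# Real minifiers also shorten local identifiers, which gzip cannot do for you.
minified = re.sub(r"//[^\n]*", "", js)
minified = re.sub(r"\s+", " ", minified).strip()

for label, text in [("original", js), ("minified", minified)]:
    raw = text.encode("utf-8")
    print(f"{label:9} raw: {len(raw):4} bytes   gzipped: {len(gzip.compress(raw)):4} bytes")
```

    On real-world code the gap is larger than this toy example suggests, because shortened identifiers and removed tokens are information that gzip can compress but never remove.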

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up
    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing as your web browser is forced to redownload the files all over again, and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs
    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment
    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance
    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig
    Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the amount of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the amount of cores used. In reality, Windows always uses the maximum amount of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the amount of cores used.

    Clean Your Prefetch To Increase Startup Speed
    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, Windows will have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth
    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster
    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory
    Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services
    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increase your boot time, and consume memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.
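    If you would rather verify than guess, the registry values mentioned above can be inspected without changing anything. The sketch below assumes Python on Windows and the standard Memory Management registry locations; it is read-only and follows the interpretation given in the article:

```python
import winreg

# Read-only look at two settings discussed above. Neither should normally be changed:
# DisablePagingExecutive defaults to 0 (let Windows page system code as it sees fit),
# EnablePrefetcher lives under the PrefetchParameters key the article says to leave alone.
MEM_MGMT = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
PREFETCH = MEM_MGMT + r"\PrefetchParameters"

def read_value(key_path, name):
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        value, _value_type = winreg.QueryValueEx(key, name)
        return value

print("DisablePagingExecutive =", read_value(MEM_MGMT, "DisablePagingExecutive"))
print("EnablePrefetcher       =", read_value(PREFETCH, "EnablePrefetcher"))
```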

    Read the article

  • Oracle Enterprise Manager sessions on the last day of the Oracle Open World

    - by Anand Akela
    Hope you had a very productive Oracle OpenWorld so far. Hopefully, many of you attended the customer appreciation event last night on Treasure Island. We still have many Enterprise Manager related sessions today, Thursday, the last day of Oracle OpenWorld 2012. Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF). Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) sessions (time, title, location):

    - 11:15 AM - 12:15 PM | Application Performance Matters: Oracle Real User Experience Insight | Palace Hotel - Sea Cliff
    - 11:15 AM - 12:15 PM | Advanced Management of JD Edwards EnterpriseOne with Oracle Enterprise Manager | InterContinental - Grand Ballroom B
    - 11:15 AM - 12:15 PM | Spark on SPARC Servers: Enterprise-Class IaaS with Oracle Enterprise Manager 12c | Moscone West - 3018
    - 11:15 AM - 12:15 PM | Pinpoint Production Applications’ Performance Bottlenecks by Using JVM Diagnostics | Marriott Marquis - Golden Gate C3
    - 11:15 AM - 12:15 PM | Bringing Order to the Masses: Scalable Monitoring with Oracle Enterprise Manager 12c | Moscone West - 3020
    - 12:45 PM - 1:45 PM | Improving the Performance of Oracle E-Business Suite Applications: Tips from a DBA’s Diary | Moscone West - 2018
    - 12:45 PM - 1:45 PM | Advanced Management of Oracle PeopleSoft with Oracle Enterprise Manager | Moscone West - 3009
    - 12:45 PM - 1:45 PM | Managing Sun Servers and Oracle Engineered Systems with Oracle Enterprise Manager | Moscone West - 2000
    - 12:45 PM - 1:45 PM | Strategies for Configuring Oracle Enterprise Manager 12c in a Secure IT Environment | Moscone West - 3018
    - 12:45 PM - 1:45 PM | Using Oracle Enterprise Manager 12c to Control Operational Costs | Moscone South - 308
    - 2:15 PM - 3:15 PM | My Oracle Support: The Proactive 24/7 Assistant for Your Oracle Installations | Moscone West - 3018
    - 2:15 PM - 3:15 PM | Functional and Load Testing Tips and Techniques for Advanced Testers | Moscone South - 307
    - 2:15 PM - 3:15 PM | Oracle Enterprise Manager Deployment Best Practices | Moscone South - 104

    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

< Previous Page | 238 239 240 241 242 243 244 245 246 247 248 249  | Next Page >