Search Results

Search found 4690 results on 188 pages for 'ran gualberto'.


  • Uninstalling Reporting Server 2008 on Windows Server 2008

    - by Piotr Rodak
Ha. I had the rather dubious pleasure of installing and reinstalling and reinstalling and reinstalling – I think about 5 times before it worked – Reporting Server 2008 on Windows Server with the same year number in its name. During my struggle I came across an error which seems not to be unfamiliar to some of the more unfortunate developers and admins who happen to uninstall SSRS 2008 from a server.

    I had SSRS 2008 installed as a named instance, SQL2008. I wanted to uninstall the server and install it to the default instance. And this is when it bit me – not the first time and not the last that day. The setup complained that it couldn’t access a DLL:

      TITLE: Microsoft SQL Server 2008 Setup
      ------------------------------
      The following error has occurred:
      Access to the path 'C:\Windows\SysWOW64\perf-ReportServer$SQL2008-rsctr.dll' is denied.
      For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22&EvtType=0x60797DC7%25400x84E8D3C0
      ------------------------------
      BUTTONS: OK

    This issue has a fair bit of literature dedicated to it, including a KB article, http://support.microsoft.com/kb/956173, and a similar Connect item: http://connect.microsoft.com/SQLServer/feedback/details/363653/error-messages-when-upgrading-from-sql-2008-rc0-to-rtm. The KB article describes the issue as follows: "When you try to uninstall Microsoft SQL Server 2008 Reporting Services from the server, you may receive the following error message: An error has occurred: Access to the path 'Drive_Letter:\WINDOWS\system32\perf-ReportServer-rsctr.dll' is denied. Note: Drive_Letter refers to the disc drive into which the SQL Server installation media is inserted." In my case the Note was not true; the error pointed to a DLL located in the Windows folder on C:\, not where the installation media was.

    Despite this difference I tried to identify any process that might be keeping a lock on the DLL. I downloaded Sysinternals Process Explorer and ran it to find any processes I could stop. Unfortunately, there was no such process. I reran the installation, but it failed at the same step. Eventually I decided to remove the DLL before the setup executed. I renamed the DLL so I would be able to restore it in case of problems. Interestingly, Windows let me do it, which confirms that it was indeed not locked by any process. I ran the setup again and this time it uninstalled the instance without any problems.

    To summarize my experience: be very careful and don’t leave any leftovers after uninstallation – remove or rename any folders that remain after setup has finished. For some reason, setup doesn’t remove certain folders and files. Installation on Windows Server 2008 requires more attention than on Windows Server 2003 because of the changed security model; some actions can be executed only by an administrator in elevated execution mode. In general, you have to get used to UAC and a somewhat different experience than with Windows Server 2003.

    Technorati Tags: SQL Server 2008, Windows Server 2008, SSRS, Reporting Services
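    The workaround, in other words, is just a rename. If you want to script it, a minimal sketch (assuming the path from the error above; run it elevated) could look like this:

      using System;
      using System.IO;

      class RenamePerfCounterDll
      {
         static void Main()
         {
            // Path taken from the setup error above - adjust the instance
            // suffix ($SQL2008) to match the instance being uninstalled.
            string dll = @"C:\Windows\SysWOW64\perf-ReportServer$SQL2008-rsctr.dll";

            if (File.Exists(dll))
            {
               // Rename rather than delete, so the file can be restored
               // if setup still fails for some other reason.
               File.Move(dll, dll + ".bak");
               Console.WriteLine("Renamed to " + dll + ".bak");
            }
            else
            {
               Console.WriteLine("DLL not found; nothing to rename.");
            }
         }
      }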

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
With the betas for .NET 4.5, Visual Studio 11 and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine.

    When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine. .NET 4.0 gets overwritten by a new version of .NET 4.5 which – according to Microsoft – is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's much more than backwards compatibility that makes this awkward at best and confusing at worst.

    What does ‘replacement’ mean? When you install .NET 4.5, your .NET 4.0 assemblies in \Windows\Microsoft.NET\Framework\v4.0.30319 are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies, for example). Comparing system.dll on a test machine running .NET 4.5 with a production laptop running stock .NET 4.0 shows two clearly different files with different sizes (interestingly, the 4.5 version is actually smaller).

    That’s not all. If you query the runtime version when .NET 4.5 is installed with Environment.Version, you still get 4.0.30319. The file version of the System.dll assembly in .NET 4.5 is likewise left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least. There’s no easy or obvious way to tell whether you are running on 4.0 or 4.5 – to the application they appear to be the same runtime version. And that is what Microsoft intends here: .NET 4.5 is intended as an in-place upgrade.

    Compile to 4.5, run on 4.0 – not quite! You can compile an application for .NET 4.5 and run it on the 4.0 runtime – that is, until you hit a new feature that doesn’t exist on 4.0, at which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0 but has a few of the new features of .NET 4.5, like async/await, buried deep in the bowels of the application where they only fire occasionally. .NET will happily start your application and run everything 4.0 fine, until it hits that 4.5 code – and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5, of course, and that should work without much fanfare.

    Different than .NET 3.0/3.5: note that this in-place replacement is very different from the side-by-side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn’t changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 11, you can still target .NET 4.0 or you can target .NET 4.5. But you are in effect referencing the same set of assemblies for both, regardless of which version you use.
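    As an aside, the build-number heuristic described above can be expressed in a few lines of code. This is a sketch of the hokey check, not an official detection API – the 261/17379 numbers come from the beta builds mentioned in this post and may well change by RTM:

      using System;
      using System.Diagnostics;

      class RuntimeCheck
      {
         static void Main()
         {
            // Both 4.0 and 4.5 report the same runtime version here: 4.0.30319
            Console.WriteLine("Environment.Version: " + Environment.Version);

            // Beta-era heuristic: check the file build number of a core
            // framework assembly. ~261 = stock 4.0, ~17xxx = 4.5 beta.
            string corlib = typeof(object).Assembly.Location;
            FileVersionInfo fvi = FileVersionInfo.GetVersionInfo(corlib);
            Console.WriteLine("mscorlib file version: " + fvi.FileVersion);
            Console.WriteLine(fvi.FileBuildPart > 17000
               ? "Probably .NET 4.5"
               : "Probably .NET 4.0");
         }
      }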
What actually differs between targeting 4.0 and 4.5 is the compiler used to compile and link your code: compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, while the 4.5 compiler gives you the full functionality of what’s actually available in the assemblies and extra libraries. It doesn’t look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications.

    Good news – bad news. Microsoft is apparently trying hard to experiment with every possible permutation of releasing new versions of the .NET framework; no two updates have been the same. Clearly, updating to a full new version of .NET (i.e. the .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft’s standards – especially given that .NET 4.5 includes a fairly significant update, with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs.

    To make things worse, .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn’t happen). This is the same story we had when Vista launched with .NET 3.0, a minor version that was quickly replaced by 3.5, which was more long-lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions, which ran on .NET 2.0. I can’t count the number of support calls and questions I’ve fielded because people couldn’t find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It’s all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed.

    It’s hard for me to see any upside to an in-place update, and I haven’t really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don’t break, so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won’t be recompiling all code and thoroughly testing that it works on .NET 4.5? A recompile requirement doesn’t seem that serious in light of a major version upgrade.

    Resources:
    http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx
    http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET

    Read the article

  • Packing a DBF

    - by Tom Hines
I thought my days of dealing with DBFs as a "production data" source were over, but HA (no such luck). I recently had to retrieve, modify and replace some data that needed to be delivered in a DBF file. Everything was fine until I realized/remembered that the DBF driver does not ACTUALLY delete records from the data source – it only marks them for deletion. You are responsible for handling the "chaff", either by using a utility to remove deleted records or by simply ignoring them. If imported into Excel, the marked-deleted records are ignored, but the file size will reflect the extra content. After several rounds of testing CRUD, the output DBF was huge.

    So I went hunting for a method to "pack" the records (removing deleted ones and resizing the DBF file) and eventually ran across the FoxPro driver at http://msdn.microsoft.com/en-us/vfoxpro/bb190233.aspx. Once it was installed, I changed the DSN in the code to the new one I created in the ODBC Administrator and ran some tests. Using MSQuery, I simply tested the raw SQL command PACK {tablename} and it WORKED!

    One really neat thing is that the PACK command is used like a regular SQL instruction; "PACK {tablename}" is all that is needed. It is necessary, however, to close all connections to the database (and re-open) before issuing the PACK command, or you will get a "File is in use" error.

    Here is some C# code for a Pack method:

      /// <summary>
      /// Pack the DBF, removing all deleted records
      /// </summary>
      /// <param name="strTableName">The table to pack</param>
      /// <param name="strError">output of any errors</param>
      /// <returns>bool (true if no errors)</returns>
      public static bool Pack(string strTableName, ref string strError)
      {
         bool blnRetVal = true;
         try
         {
            // DSN created in the ODBC Administrator for the FoxPro driver
            OdbcConnectionStringBuilder csbOdbc = new OdbcConnectionStringBuilder()
            {
               Dsn = "PSAP_FOX_DBF"
            };
            string strSQL = "pack " + strTableName;
            using (OdbcConnection connOdbc = new OdbcConnection(csbOdbc.ToString()))
            {
               connOdbc.Open();
               OdbcCommand cmdOdbc = new OdbcCommand(strSQL, connOdbc);
               cmdOdbc.ExecuteNonQuery();
               connOdbc.Close();
            }
         }
         catch (Exception exc)
         {
            blnRetVal = false;
            strError = exc.Message;
         }
         return blnRetVal;
      }
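    Calling it is then a one-liner. For example (MYTABLE is a hypothetical table name; the method assumes the PSAP_FOX_DBF DSN from the listing exists):

      string strError = string.Empty;
      if (!Pack("MYTABLE", ref strError))
      {
         Console.WriteLine("Pack failed: " + strError);
      }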

    Read the article

  • An XEvent a Day (16 of 31) – How Many Checkpoints are Issued During a Full Backup?

    - by Jonathan Kehayias
This wasn’t my intended blog post for today, but last night a question came across #SQLHelp on Twitter from Varun (Twitter): "#sqlhelp how many checkpoints are issued during a full backup?" The question was answered by Robert Davis (Blog|Twitter) as: "Just 1, at the very start. RT @1sql: #sqlhelp how many checkpoints are issued during a full backup?" This seemed like a great thing to test out with Extended Events, so I ran through the available Events in SQL Server 2008, and the only Event related...(read more)

    Read the article

  • ubuntu-12.04-wubi-i386.tar.xz for the wubi installer

    - by Alejandro
I ran the Wubi installer (from an extracted Ubuntu ISO) and it downloads ubuntu-12.04-wubi-i386.tar.xz, but the download is slow and non-resumable, so I cancelled it, found a mirror of the file online and downloaded it using Internet Download Manager. Where should I place the ubuntu-12.04-wubi-i386.tar.xz file so that the Wubi installer won't have to download it again? Thank you. Update: I extracted the archive and there are two files; I'm afraid I do not know where to place them.

    Read the article

  • Replication Services as ETL extraction tool

    - by jorg
In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool:

    “Snapshot Replication can also be useful in BI environments. If you don’t need a near real-time copy of the database, you can choose to use this form of replication. Next to being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be, or whether it really works as well for this purpose as I expect, but I want to try it out soon!”

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would have cost me to do the same in SSIS. What I did was the following:

    - Configure snapshot replication for some Adventure Works tables; this was quite simple and straightforward.
    - Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can’t do with out-of-the-box functionality.

    While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that when you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication

    The first step is to create a publication on the database you want to replicate:
    - Connect in SQL Server Management Studio, right-click Replication and choose New Publication… The New Publication Wizard appears; click Next.
    - Choose your “source” database and click Next.
    - Choose Snapshot publication and click Next.
    - You can now select tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
    - In the next screen you can add filters on the selected tables, which can be very useful – think about selecting only the last x days of data, for example. It is possible to filter out rows and/or columns. In this example I did not apply any filters.
    - Schedule the Snapshot Agent to run at a desired time. By doing this, a SQL Agent job is created which we need to execute from an SSIS package later on.
    - Next you need to set the security settings for the Snapshot Agent. Click on the Security Settings button. In this example I ran the agent under the SQL Server Agent service account. This is not recommended as a security best practice. Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it here and make sure you follow the guidelines!
    - On the next screen, choose to create the publication at the end of the wizard.
    - Give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (tables in this case) are added.

    Now the publication has been created successfully, it is time to create a new subscription for it:
    - Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions. The New Subscription Wizard appears.
    - Select the publisher on which you just created your publication, and select the database and publication (SnapshotTest).
    - You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscription) it causes extra processing overhead. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscription) to reduce the processing overhead at the distributor.
    - Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options and click OK.
    - You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    - You now need to set the security settings for the Distribution Agent. Click on the … button. Again, in this example I ran the agent under the SQL Server Agent service account; read the security best practices here first.
    - Click Next. Make sure you create a synchronization job schedule again – this job is also needed in the SSIS package later on. Initialize the subscription at first synchronization.
    - Select the first box to create the subscription when finishing this wizard, and complete the wizard by clicking Finish. The subscription will be created.

    In SSMS you will see that a new database, the subscriber, has been created. There are no tables or other objects in the database yet, because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot; rename this job to “CreateSnapshot”. Then search for the job that distributes the snapshot and rename it to “DistributeSnapshot”.

    Create an SSIS package that executes the snapshot replication

    We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs, and after the DistributeSnapshot job has started, the package needs to wait until it has finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure, because these jobs are asynchronous. The SSIS package I’ve created does the following:

    - It runs the CreateSnapshot job.
    - It checks every 5 seconds, in a For loop, whether the job has completed.
    - When the CreateSnapshot job has completed, it starts the DistributeSnapshot job.
    - Again, it waits until the snapshot has been delivered before the package finishes successfully.

    Quite simple, and the package is ready to use as a standalone extract mechanism (a sketch of the same execute-and-wait pattern in code follows below).
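    For illustration, the same execute-and-wait logic can also be written against msdb directly, outside of SSIS. This is a minimal C# sketch, not the downloadable package: it assumes the renamed CreateSnapshot and DistributeSnapshot jobs from above, rights to run msdb.dbo.sp_start_job, and polls msdb.dbo.sysjobactivity to detect completion:

      using System;
      using System.Data;
      using System.Data.SqlClient;
      using System.Threading;

      class SnapshotRunner
      {
         // Starts a SQL Agent job and blocks until it is no longer executing.
         static void RunJobAndWait(string connectionString, string jobName)
         {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
               conn.Open();

               using (SqlCommand start = new SqlCommand("msdb.dbo.sp_start_job", conn))
               {
                  start.CommandType = CommandType.StoredProcedure;
                  start.Parameters.AddWithValue("@job_name", jobName);
                  start.ExecuteNonQuery(); // returns immediately - agent jobs are asynchronous
               }

               Thread.Sleep(2000); // give the agent a moment to register the job as running

               const string pollSql =
                  "SELECT COUNT(*) FROM msdb.dbo.sysjobactivity ja " +
                  "JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id " +
                  "WHERE j.name = @job " +
                  "AND ja.start_execution_date IS NOT NULL " +
                  "AND ja.stop_execution_date IS NULL";

               using (SqlCommand poll = new SqlCommand(pollSql, conn))
               {
                  poll.Parameters.AddWithValue("@job", jobName);
                  while ((int)poll.ExecuteScalar() > 0)
                  {
                     Thread.Sleep(5000); // same 5 second interval as the SSIS loop
                  }
               }
            }
         }

         static void Main()
         {
            string cs = "Data Source=.;Initial Catalog=msdb;Integrated Security=SSPI";
            RunJobAndWait(cs, "CreateSnapshot");
            RunJobAndWait(cs, "DistributeSnapshot");
         }
      }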
After executing the package, the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).

    Conclusion

    In this example I replicated only 5 tables; I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all of the 70+ AdventureWorks tables, I would save a lot of time and boring work! With Replication Services you also benefit from the feature that schema changes are applied automatically, which means your entire extract phase won’t break. Because a snapshot is created using the bcp (bulk copy) utility, it is also quite fast, so performance should be good. A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that will involve a lot of tables, think about Replication Services; it could save you a lot of time, and thanks to the extract SSIS package I’ve created you can fit it perfectly into your usual SSIS ETL process.

    Read the article

  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world-record result was produced running Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark concurrently with a batch workload.

    - The SPARC T3-1 based result has 25% better performance than the IBM Power 750 POWER7 server, even though the IBM result did not include a batch component.
    - The SPARC T3-1 based result has 25% better space/performance than the IBM Power 750 POWER7 server, as measured by the online component.
    - The SPARC T3-1 based result is 5x faster than the x86-based IBM x3650 M2 server when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component.
    - The SPARC T3-1 based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server, as measured by the online component.
    - The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds average transaction response time, running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute.

    The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark, with minimal loss in response time. JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server, leaving processing capacity for additional growth.

    A number of advanced Oracle technologies and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java HotSpot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, and the SPARC T3 and SPARC64 VII+ based servers. This is the first published result running both online and batch workloads concurrently on the JD Edwards Enterprise Application server; no published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0; when comparing 9.0.1 and 9.0 results, the reader should take this into account where the difference between results is small.

    Performance Landscape

    JD Edwards EnterpriseOne Day in the Life Benchmark – Online with Batch Workload
    This is the first publication of the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine.

    System: SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10 + M3000, 1 x SPARC64 VII+ (2.86 GHz), Solaris 10
      Rack Units: 4 | Online Users: 5000 | Resp Time (sec): 0.88 | Batch Concur (# of UBEs): 19 | Batch Rate (UBEs/m): 10 | Version: 9.0.1

    Resp Time (sec) – response time of online jobs, reported in seconds.
    Batch Concur (# of UBEs) – batch concurrency, as the number of UBEs.
    Batch Rate (UBEs/m) – batch transaction rate in UBEs/minute.

    JD Edwards EnterpriseOne Day in the Life Benchmark – Online Workload Only
    These results are for the Day in the Life benchmark run without any batch workload.

    System: SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10 + M3000, 1 x SPARC64 VII (2.75 GHz), Solaris 10
      Rack Units: 4 | Online Users: 5000 | Response Time (sec): 0.52 | Version: 9.0.1
    System: IBM Power 750, 1 x POWER7 (3.55 GHz), IBM i 7.1
      Rack Units: 4 | Online Users: 4000 | Response Time (sec): 0.61 | Version: 9.0
    System: IBM x3650 M2, 2 x Intel X5570 (2.93 GHz), OVM
      Rack Units: 2 | Online Users: 1000 | Response Time (sec): 0.29 | Version: 9.0

    IBM results from http://www-03.ibm.com/systems/i/advantages/oracle/; IBM used WebSphere.

    Configuration Summary

    Hardware configuration:
    1 x SPARC T3-1 server: 1 x 1.65 GHz SPARC T3; 128 GB memory; 16 x 300 GB 10000 RPM SAS; 1 x Sun Flash Accelerator F20 PCIe Card, 92 GB; 1 x 10 GbE NIC
    1 x SPARC Enterprise M3000 server: 1 x 2.86 GHz SPARC64 VII+; 64 GB memory; 1 x 10 GbE NIC; 2 x StorageTek 2540 + 2501

    Software configuration:
    JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3
    Oracle Database 11g Release 2
    Oracle WebLogic Server 11g Release 1, version 10.3.2
    Oracle Web Tier Utilities 11g
    Oracle Solaris 10 9/10
    Mercury LoadRunner 9.10 with Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1
    Oracle's Universal Batch Engine – Short UBEs and Long UBEs

    Benchmark Description

    JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and a UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Edwards Enterprise Application server. Oracle's UBE processes come in three flavors: short UBEs (< 1 minute) engage in business reports and summary analysis; mid UBEs (> 1 minute) create a large report of account, balance and full address; long UBEs (> 2 minutes) simulate payroll, sales order, and night-only jobs. The UBE workload generates large numbers of PDF report files and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One Oracle Solaris Container ran the 4 long UBEs, while another container ran the 15 short UBEs concurrently. The mixed-size UBEs ran concurrently on the SPARC T3-1 server alongside the 5000 online users driven by LoadRunner. Oracle's UBE process performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.
Key Points and Best Practices

    Two JD Edwards EnterpriseOne application servers and two Oracle Fusion Middleware WebLogic Server 11g R1 instances, coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances, were hosted on the SPARC T3-1 server in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers.

    See Also
    SPARC T3-1 – oracle.com
    SPARC Enterprise M3000 – oracle.com
    Oracle Solaris – oracle.com
    JD Edwards EnterpriseOne – oracle.com
    Oracle Database 11g Release 2 Enterprise Edition – oracle.com

    Disclosure Statement
    Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

    Read the article

  • Generate DROP statements for all extended properties

    - by jamiet
This evening I have been attempting to migrate an existing on-premise database to SQL Azure using the wizard that is built in to SQL Server Management Studio (SSMS). When I did so I received the following error:

    The following objects are not supported = [MS_Description] = Extended Property

    Evidently databases containing extended properties cannot be migrated using this particular wizard, so I set about removing all of the extended properties – unfortunately there were over a thousand of them, so I needed a better way than simply deleting each and every one of them manually. I found a couple of resources online that went some way toward this:

    - Drop all extended properties in a MSSQL database by Angelo Hongens
    - Modifying and deleting extended properties by Adam Aspin

    Unfortunately neither provided a script that exactly suited my needs. Angelo's covered extended properties on tables and columns, however I had other objects that had extended properties on them. Adam's looked more complete, but when I ran it I got an error:

    Msg 468, Level 16, State 9, Line 78
    Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation.

    So, both are great resources, but I wasn't able to use either on its own to get rid of all of my extended properties. Hence, I combined the excellent work that Angelo and Adam had provided in order to manufacture my own script, which did successfully manage to generate calls to sp_dropextendedproperty for all of my extended properties. If you think you might be able to make use of such a script, feel free to download it from https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16707&parid=550F681DAD532637!16706&authkey=!APxPIQCatzC7BQ8.

    This script will remove extended properties on tables, columns, check constraints, default constraints, views, sprocs, foreign keys, primary keys, table triggers, UDF parameters, sproc parameters, databases, schemas, database files and filegroups. If you have any object types with extended properties on them that are not in that list, consult Adam's aforementioned article – it should prove very useful. I repeat here the message that I have placed at the top of the script:

    /*
    This script will generate calls to sp_dropextendedproperty for every extended property that exists in your database. Actually, a caveat: I don't promise that it will catch each and every extended property that exists, but I'm confident it will catch most of them!

    It is based on this: http://blog.hongens.nl/2010/02/25/drop-all-extended-properties-in-a-mssql-database/ by Angelo Hongens.
    Also had lots of help from this: http://www.sqlservercentral.com/articles/Metadata/72609/ by Adam Aspin.

    Adam actually provides a script at that link to do something very similar, but when I ran it I got an error:
    Msg 468, Level 16, State 9, Line 78
    Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation.

    So I put together this version instead. Use at your own risk.
    Jamie Thomson 2012-03-25
    */

    Hope this is useful to someone! @Jamiet
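    Jamie's downloadable script is the real thing; purely as an illustration of the same idea, here is a minimal C# sketch that generates sp_dropextendedproperty calls for one slice of the problem – properties on tables, views, procedures and their columns (class = 1). The connection string and database name are placeholders, and the many other object classes the full script handles are deliberately ignored here:

      using System;
      using System.Data.SqlClient;

      class DropExtendedProps
      {
         static void Main()
         {
            // Enumerate class = 1 extended properties (objects and columns)
            // and print a drop statement for each.
            const string sql = @"
               SELECT ep.name,
                      OBJECT_SCHEMA_NAME(ep.major_id) AS schema_name,
                      OBJECT_NAME(ep.major_id)        AS object_name,
                      o.type                          AS object_type,
                      c.name                          AS column_name
               FROM sys.extended_properties ep
               JOIN sys.objects o ON o.object_id = ep.major_id
               LEFT JOIN sys.columns c ON c.object_id = ep.major_id
                                      AND c.column_id = ep.minor_id
               WHERE ep.class = 1";

            using (SqlConnection conn = new SqlConnection(
               "Data Source=.;Initial Catalog=MyDatabase;Integrated Security=SSPI"))
            {
               conn.Open();
               using (SqlCommand cmd = new SqlCommand(sql, conn))
               using (SqlDataReader rdr = cmd.ExecuteReader())
               {
                  while (rdr.Read())
                  {
                     string type = rdr["object_type"].ToString().Trim();
                     string level1Type = type == "V" ? "VIEW"
                                       : type == "P" ? "PROCEDURE"
                                       : "TABLE";
                     string stmt = string.Format(
                        "EXEC sp_dropextendedproperty @name=N'{0}', " +
                        "@level0type=N'SCHEMA', @level0name=N'{1}', " +
                        "@level1type=N'{2}', @level1name=N'{3}'",
                        rdr["name"], rdr["schema_name"], level1Type, rdr["object_name"]);

                     // Minor_id != 0 means the property sits on a column
                     if (rdr["column_name"] != DBNull.Value)
                     {
                        stmt += string.Format(
                           ", @level2type=N'COLUMN', @level2name=N'{0}'",
                           rdr["column_name"]);
                     }

                     Console.WriteLine(stmt + ";");
                  }
               }
            }
         }
      }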

    Read the article

  • Updating Nvidia drivers 11.10 (nvidia 290.xxx.xxx)

    - by Jesse
Hi, I installed the NVIDIA drivers using the terminal. I added the newest repositories to ensure I'd get the newest driver. Once that was complete, I ran the following command:

    sudo apt-get install nvidia-current

    This started the download and successfully installed the 290.xx.xxx version. The issue I have is that when I open the X Server application, it still shows NVIDIA driver version 173.14.30. Can someone walk me through correcting this so it corresponds to the driver installed? Thanks

    Read the article

  • Keep Xubuntu Network Manager from overwriting resolv.conf

    - by leeand00
How do I keep Xubuntu 11.10 from overwriting resolv.conf every time I reboot my machine? Every time I reboot, I get an overwritten resolv.conf that contains the words "# Generated by Network Manager" and no nameservers specified. I ran the following to get rid of Network Manager, but it's still replacing my resolv.conf when I restart the machine:

    sudo apt-get --purge remove network-manager
    sudo apt-get --purge remove network-manager-gnome
    sudo apt-get --purge remove network-manager-pptp
    sudo apt-get --purge remove network-manager-pptp-gnome

    Read the article

  • how to reinstall/repair ubuntu 12.04 after dual boot installation fails with windows 7

    - by Rini
I have installed Ubuntu 12.04 on my preinstalled Windows 7 Sony Vaio S series laptop, following the instructions here: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/

    Everything went well and I was able to boot into Windows after the complete installation of Ubuntu. Then, following instructions on the web, I tried to add Ubuntu to my BIOS using EasyBCD (but forgot to add a Windows 7 entry). As a result I lost the Windows 7 OS and couldn't boot into either OS. I then successfully repaired Windows 7 using the recovery CD.

    Now my problem is that I can't reinstall Ubuntu 12.04 using the Live CD; it halts every time before the disk partitioning step, giving the error:

    "ubi-partman crashed"
    "ubi-partman failed with exit code 141. Further information may be found in /var/log/syslog. Do you want to try running this step again before continuing? If you do not, your installation may fail entirely or may be broken."

    Any choice to continue results in the same error, and I am also unable to start GParted. I looked into /var/log/syslog but was not able to understand what the error is. Then I ran sudo fdisk -l to view my partitions and it shows me only one partition. Probably all the partitions I created for Ubuntu 12.04 were lost while running the Windows 7 recovery CD, so I don't know whether Ubuntu is still there or is corrupted.

    My boot-info URL is: http://paste.ubuntu.com/1202146/

    Please tell me how to remove this error so that I can reinstall/repair Ubuntu 12.04. Thanks in advance. R Shukla

    Read the article

  • Time to Check Your Servers

    - by fatherjack
Do you know how to find the time that your SQL Server started? Since SQL Server 2008 you can use:

    SELECT sqlserver_start_time
    FROM sys.dm_os_sys_info

    On one of my servers this gives me the instance start time. This is great, and can be used in lots of ways. I happened across the [sys].[dm_exec_requests] view the other day and out of curiosity ran the query:

    SELECT MIN(start_time) AS [start time]
    FROM [sys].[dm_exec_requests] AS der

    And I was surprised to see the result: almost exactly an hour different. Now as...(read more)

    Read the article

  • Dependency problem with mysql-server-core-5.5

    - by Tama
When I start the Ubuntu Software Centre, it says I cannot do anything until the package catalog is repaired. However, repairing fails. I ran sudo apt-get -f install and found the problem to be:

    mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.24-0ubuntu0.12.04.1); however:
      Version of mysql-server-core-5.5 on system is 5.5.28-0ubuntu0.12.04.2.

    So, the question is: how do I install that version and resolve the dependency problem?

    Read the article

  • NVIDIA x server - "sudo nvidia config" does not generate a working 'xorg.config'

    - by Mike
I am over 18 hours deep on this challenge. I got to this point and am stuck. Very stuck. Maybe you can figure it out?

    Ubuntu version: 12.04 LTS with all the updates installed.

    Problem: The default settings in /etc/X11/xorg.conf that are generated by the nvidia-xconfig tool do not allow the NVIDIA X server to connect to the driver shown in my "System Settings > Additional Drivers" window (that's how I understand it; lots of information below).

    Symptoms of the problem:
    1. The "System Settings > Additional Drivers" window has drivers, but the NVIDIA X server cannot connect to or utilize any of the 4 drivers. The drivers are activated, but not in use.
    2. When I go to "System Tools > Administration > NVIDIA X Server Settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screenshot below).
    3. The terminal messages after running sudo nvidia-xconfig for the first time suggest that the file the tool just generated is bad/unusable.
    4. If I run sudo nvidia-xconfig again, I won't get an error the second time. However, when I reboot, the default file that is generated (/etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something like that). When I try to go to NVIDIA X Server Settings I am greeted with the same screen as in symptom 2 (no option to change the resolution). If I go to "System Settings > Display" there are no other resolutions to choose from. At this point I must delete the newly minted xorg.conf and reinstate the original in its place.

    Here are the contents of the generated xorg.conf (the one missing required information):

      # nvidia-xconfig: X configuration file generated by nvidia-xconfig
      # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013

      Section "ServerLayout"
          Identifier "Layout0"
          Screen 0 "Screen0"
          InputDevice "Keyboard0" "CoreKeyboard"
          InputDevice "Mouse0" "CorePointer"
      EndSection

      Section "Files"
      EndSection

      Section "InputDevice"
          # generated from default
          Identifier "Mouse0"
          Driver "mouse"
          Option "Protocol" "auto"
          Option "Device" "/dev/psaux"
          Option "Emulate3Buttons" "no"
          Option "ZAxisMapping" "4 5"
      EndSection

      Section "InputDevice"
          # generated from default
          Identifier "Keyboard0"
          Driver "kbd"
      EndSection

      Section "Monitor"
          Identifier "Monitor0"
          VendorName "Unknown"
          ModelName "Unknown"
          HorizSync 28.0 - 33.0
          VertRefresh 43.0 - 72.0
          Option "DPMS"
      EndSection

      Section "Device"
          Identifier "Device0"
          Driver "nvidia"
          VendorName "NVIDIA Corporation"
      EndSection

      Section "Screen"
          Identifier "Screen0"
          Device "Device0"
          Monitor "Monitor0"
          DefaultDepth 24
          SubSection "Display"
              Depth 24
          EndSubSection
      EndSection

    Hardware: I ran lspci | grep VGA. The results are:

      00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
      01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)

    More hardware info:
    RAM: 16 GB
    CPU: Intel Core i7-2720QM @ 2.2 GHz x 8
    Other: 64-bit. This is a triple-boot computer, not a VM.

    Attempts so far, with no success:
    1. Tried to append the xorg.conf with what I perceive is the missing information; obviously it didn't fly.
    2. All the other stuff I tried got me to this point.
    3. See if this link is helpful to you (I barely get it, but I get enough to know that a smarter person might find it useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html
    4. I am completely new to Linux (40 hours over the past week), but not to programming. However, I am very serious about changing over to Linux. When you respond (I hope someone responds...), please respond in a way that a person new to Linux can understand.
    5. By the way, the reason I am in this mess is that I MUST have a second monitor running from my laptop, and "System Settings > Display" doesn't recognize my second display. I know it is possible to make the second display work on my system, because when I boot from the install CD I can work on the native laptop monitor while the second monitor shows a purple screen with "Ubuntu" in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle, please suggest an alternative method to get a second display. I don't want to go back to Windows, but I cannot work with a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike.

    EDIT #1 – More details about the graphics card

    I was asked "which brand of nvidia card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything):
    1. Took my Lenovo W520 apart to see if there is an identifier on the actual card. However, I realized that if I got deep enough to take a look, the laptop "wouldn't like it", so I put it back together. Identifying the card this way is not an option for me right now.
    2. (My computer is triple boot.) I logged into Win7 and ran the dxdiag command (screenshot taken).
    3. I tried to look on the Lenovo website for more details, but no luck. I took a look at my receipts; here is the info from the receipt: System Unit: W520, NVIDIA Quadro 1000M 2GB.
    4. In Win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the driver the NVIDIA applet recommends for my card on Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319, Version 320.00 WHQL, Release Date 3.5.2013.
    5. I also went to the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro; Product Series: Quadro Series (Notebooks); Product: 1000M.

    EDIT #2 – Additional symptoms

    Another question generated more symptoms I previously didn't mention: "After generating xorg.conf by nvidia-xconfig, go to Additional Drivers – do you see nvidia-304?"
    1. I took a screenshot of Additional Drivers right after generating xorg.conf with nvidia-xconfig.
    2. Then I rebooted. Now Ubuntu is at 800 x 600 resolution. When I logged in after the computer came up, I got an error (which I always get after generating xorg.conf by nvidia-xconfig and rebooting).
    3. To finally answer the question – no, there is no "nvidia-304" driver in Additional Drivers after generating xorg.conf by nvidia-xconfig and rebooting. At this point I revert to the original xorg.conf and delete the one generated by nvidia-xconfig.

    Read the article

  • Ubuntu server 12.04 E: Unable to locate package noip2

    - by cesar
I just installed Ubuntu Server 12.04 and I want to install No-IP. I ran as root:

    sudo apt-get install noip2

    and I got this:

      root@topcat:/var# sudo apt-get install noip2
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      E: Unable to locate package noip2

    I also tried wget http://www.no-ip.com/client/linux/noip-duc-linux.tar.gz, but I got: wget: unable to resolve host address `www.no-ip.com'. Can somebody help me? I am new to Ubuntu.

    Read the article

  • Recover files in Ubuntu from USB stick

    - by Jon
I have Ubuntu 11.04 running from a USB stick. I think I ran some updates in the terminal, and now it has a kernel panic during boot. Is there a way I can get hold of the files I saved when Ubuntu was working? I just managed to watch the boot screen, and it looks as if there are some errors saying there is no room to write to disk; however, my USB stick has 5 GB free. UPDATE: Here is what I see when I plug the USB stick into Windows

    Read the article

  • Intel Atom: NVIDIA ION vs. Radeon HD 4330 Graphics

Phoronix: "Before devoting this hardware to the farm, we ran a few benchmarks comparing the performance of NVIDIA's ION GeForce 9400M graphics processor to the ATI Radeon HD 4330 graphics processor found on the MSI 6667BB-004US and several other Atom-powered devices."

    Read the article

  • Dell Studio 1737 Overheating

    - by Sean
I am using a Dell Studio 1737 laptop. I have been running Linux, and have run Windows recently, for a very long time. Ever since I upgraded to the 10.10 distribution, it seems that for some reason all Linuxes want to push my laptop to extremes. I have recently upgraded to Ubuntu 12.04 since I heard that it contains kernel fixes for overheating issues. 12.04 will actually eventually cool the system, but only after the fans run to the point where it sounds like a jet aircraft taking off and the laptop makes my hands sweat.

    In trying to combat the heat problems I have done the following:

    I installed the proprietary driver for my ATI Mobility HD 3600. I have tried both the one in Additional Drivers and ATI's latest greatest version. If I don't install this, my laptop will overheat and shut off in minutes. Both seem to perform similarly, but the heat problem remains.

    I have tried limiting the CPU by installing the CPUFreq indicator. This does help keep the machine from shutting off, but the heat is still uncomfortable to be around. I usually run in power-saver mode or run the CPU at 1.6 GHz just to err on the side of safety.

    I ran sensors-detect and here are the results:

      sean@sean-Studio-1737:~$ sudo sensors-detect
      # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
      # System: Dell Inc. Studio 1737 (laptop)
      # Board: Dell Inc. 0F237N

      This program will help you determine which kernel modules you need
      to load to use lm_sensors most effectively. It is generally safe
      and recommended to accept the default answers to all questions,
      unless you know what you're doing.

      Some south bridges, CPUs or memory controllers contain embedded sensors.
      Do you want to scan for them? This is totally safe. (YES/no): y
      Module cpuid loaded successfully.
      Silicon Integrated Systems SIS5595... No
      VIA VT82C686 Integrated Sensors... No
      VIA VT8231 Integrated Sensors... No
      AMD K8 thermal sensors... No
      AMD Family 10h thermal sensors... No
      AMD Family 11h thermal sensors... No
      AMD Family 12h and 14h thermal sensors... No
      AMD Family 15h thermal sensors... No
      AMD Family 15h power sensors... No
      Intel digital thermal sensor... Success! (driver `coretemp')
      Intel AMB FB-DIMM thermal sensor... No
      VIA C7 thermal sensor... No
      VIA Nano thermal sensor... No

      Some Super I/O chips contain embedded sensors. We have to write to
      standard I/O ports to probe them. This is usually safe.
      Do you want to scan for Super I/O sensors? (YES/no): y
      Probing for Super-I/O at 0x2e/0x2f
      Trying family `National Semiconductor/ITE'... No
      Trying family `SMSC'... No
      Trying family `VIA/Winbond/Nuvoton/Fintek'... No
      Trying family `ITE'... No
      Probing for Super-I/O at 0x4e/0x4f
      Trying family `National Semiconductor/ITE'... Yes
      Found `ITE IT8512E/F/G Super IO' (but not activated)

      Some hardware monitoring chips are accessible through the ISA I/O ports.
      We have to write to arbitrary I/O ports to probe them. This is usually
      safe though. Yes, you do have ISA I/O ports even if you do not have any
      ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
      Probing for `National Semiconductor LM78' at 0x290... No
      Probing for `National Semiconductor LM79' at 0x290... No
      Probing for `Winbond W83781D' at 0x290... No
      Probing for `Winbond W83782D' at 0x290... No

      Lastly, we can probe the I2C/SMBus adapters for connected hardware
      monitoring devices. This is the most risky part, and while it works
      reasonably well on most systems, it has been reported to cause trouble
      on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y
      Using driver `i2c-i801' for device 0000:00:1f.3: Intel ICH9
      Module i2c-i801 loaded successfully.
      Module i2c-dev loaded successfully.

      Now follows a summary of the probes I have just done.
      Just press ENTER to continue:

      Driver `coretemp':
        * Chip `Intel digital thermal sensor' (confidence: 9)

      To load everything that is needed, add this to /etc/modules:
      #----cut here----
      # Chip drivers
      coretemp
      #----cut here----

      If you have some drivers built into your kernel, the list above will
      contain too many modules. Skip the appropriate ones! Do you want to
      add these lines automatically to /etc/modules? (yes/NO) y
      Successful!

      Monitoring programs won't work until the needed modules are loaded.
      You may want to run 'service module-init-tools start' to load them.
      Unloading i2c-dev... OK
      Unloading i2c-i801... OK
      Unloading cpuid... OK

      sean@sean-Studio-1737:~$ sudo service module-init-tools start
      module-init-tools stop/waiting

    I also tried installing i8k, but that didn't work since it didn't seem to be able to communicate with the hardware (probably meant for a different kind of device). I also ran acpi -V and here are the results:

      Battery 0: Full, 100%
      Battery 0: design capacity 613 mAh, last full capacity 260 mAh = 42%
      Adapter 0: on-line
      Thermal 0: ok, 49.0 degrees C
      Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C
      Thermal 1: ok, 48.0 degrees C
      Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C
      Thermal 2: ok, 51.0 degrees C
      Thermal 2: trip point 0 switches to mode critical at temperature 100.0 degrees C
      Cooling 0: LCD 0 of 15
      Cooling 1: Processor 0 of 10
      Cooling 2: Processor 0 of 10

    I have hit a wall and don't know what to do now. Any advice is appreciated.

    Read the article

  • How to install Awesome WM without root access?

    - by ssice
I want to install the Awesome window manager. In the environment where I want to configure it, I don't have root access. I do have a machine where I can be root (I use a virtual machine on my laptop for this). I have tried the following:

      $ sudo apt-get install awesome
      The following packages are about to be installed:
        awesome libev3 libid3tag0 libimlib2 liblua5.1-0 libxcb-icccm1 libxcb-image0
        libxcb-keysyms1 libxcb-property1 libxcb-randr0 libxcb-xinerama0 libxcb-xtest0
        libxdg-basedir1 menu rlwrap
      Do you want to continue [Y/n]? n

    I now have the list of dependencies for awesome, so I downloaded them all. For that, I did the following:

      $ pkgs="awesome libev3 libid3tag0 libimlib2 liblua5.1-0 libxcb-icccm1 libxcb-image0 libxcb-keysyms1 libxcb-property1 libxcb-randr0 libxcb-xinerama0 libxcb-xtest0 libxdg-basedir1 menu rlwrap"   # just to avoid retyping ;)
      $ sudo apt-get install --download-only $pkgs
      ....
      $ mkdir -p /tmp/x_debs
      $ for pkg in $pkgs; do cp /var/cache/apt/archives/$pkg* /tmp/x_debs/; done   # copies the dependencies' *.deb files to /tmp/x_debs

    Now I wanted to install the dependencies. For that, I set up a fake dpkg install in my home folder:

      $ mkdir $HOME/root
      $ mkdir -p $HOME/root/var/lib/dpkg/{triggers,updates}
      $ touch $HOME/root/var/lib/dpkg/{available,status}

    Then I tried to install with dpkg, but I could not:

      $ dpkg --force-not-root --root=$HOME/root --recursive -i /tmp/x_debs

    It failed while trying to set permissions for the packages and running chroot. As I do have root access on this machine, I ran it with privileges:

      $ sudo dpkg --root=$HOME/root --recursive -i /tmp/x_debs

    Then I had everything (dependencies and the WM itself) installed inside $HOME/root. In particular, the xcb-* libraries were installed in $HOME/root/usr/lib and the awesome binary in $HOME/root/usr/bin/awesome. If I try to execute awesome as-is, I get an error saying the libraries could not be loaded. That's normal, as they are not in /usr/lib nor in /lib. So I ran:

      $ export LD_LIBRARY_PATH=$HOME/root/usr/lib:$HOME/root/lib:${LD_LIBRARY_PATH}

    and awesome would try to load. However, I could not make gdm run awesome within GNOME or replace it. I did it this way so I can copy my whole $HOME/root folder to the other machine and have it running. Is there any other way (with less wasted space, maybe) to do this? How can I tell gdm to exec awesome without root access?

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion.  That being said, the performance characteristics differ dramatically from native programming, and take some relearning if you’re used to doing performance optimization in most other languages, especially C, C++, and similar.  However, there are times when tricks learned in native code play a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be… The rules in C# when optimizing code are very different than in C or C++.  Often, they’re exactly backwards.  For example, in C and C++, lifting a variable out of a loop in order to avoid memory allocations can often have huge advantages.  If some function within a call graph is allocating memory dynamically, and that function gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler.  Looking at the memory allocation graph is usually the key to spotting this routine, as it’s often “hidden” deep in the call graph.  For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:
      for (i = 0; i < numberToProcess; ++i)
      {
          // Do some work
          ProcessElement(element[i]);
      }
    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data.  As such, any performance optimization we could achieve would be greatly appreciated by our users.  After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:
      // Allocate some data required
      DataStructure* data = new DataStructure(num);
      // Call into a subroutine that passed around and manipulated this data heavily
      CallSubroutine(data);
      // Read and use some values from here
      double values = data->Foo;
      // Cleanup
      delete data;
      // ...
      return bar;
    Normally, if “DataStructure” were a simple data type, I could just allocate it on the stack.  However, its constructor internally allocated its own memory using new, so this wouldn’t eliminate the problem.  In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, letting the inner routine reuse the same “data” memory instead of allocating.  At the highest level, my code effectively changed to something like:
      DataStructure* data = new DataStructure(numberToProcess);
      for (i = 0; i < numberToProcess; ++i)
      {
          // Do some work
          ProcessElement(element[i], data);
      }
      delete data;
    Granted, this dramatically reduced the maintainability of the code, so it wasn’t something I wanted to do unless there was a significant benefit.
    In this case, after profiling the new version, I found that it improved the overall performance dramatically – my main test case went from 35 minutes of runtime down to 21 minutes.  This was such a significant improvement that I felt it was worth the reduction in maintainability. In C and C++, it’s generally a good idea (for performance) to:
      • Reduce the number of memory allocations as much as possible,
      • Use fewer, larger memory allocations instead of many smaller ones, and
      • Allocate as high up the call stack as possible, and reuse memory.
    I’ve seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike in C and C++) allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short-lived objects.  If you lift a variable out of the loop and reuse the memory, it’s much more likely that the object will get promoted to Gen1 (or worse, Gen2).  This can require expensive compaction operations, and also lead to (at least temporary) memory fragmentation as well as more costly collections later. As such, I’ve found that it’s often (though not always) faster to leave memory allocations where you’d naturally place them – deep inside the call graph, inside the loops.  This keeps the objects very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to:
      • Keep variable declarations in the tightest scope possible
      • Declare and allocate objects at usage
    While this serves some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different – it’s about keeping objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0 or, worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and “released” as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time. Now – nowhere here am I suggesting that these rules are hard, fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that’s something to keep in mind – interop changes the rules. I was dealing with an API that, internally, used some COM objects, and those COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.
    Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer applied.  After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a “native programming” mindset for optimization.  Lifting these variables out and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are:
      • Always profile if you suspect a performance problem – don’t assume any rule is correct, or that any code is efficient just because it looks like it should be
      • Remember to check memory allocations when profiling, not just CPU cycles
      • Interop scenarios often cause managed code to act very differently than “normal” managed code
      • Native code can be hidden very cleverly inside of managed wrappers

    Read the article

  • 503.1 Service Unavailable Error Resolution

    - by Lee Brandt
    I was having a hell of a time tonight with IIS on my development laptop. I don’t remember doing anything to change the IIS settings, and I don’t use IIS that much on my dev machine. Usually Cassini is enough for testing my development efforts, but tonight I needed to replicate a problem that seems to stem from an x86 vs. x64 mismatch, so I went to create an IIS site pointed at my dev folder. When I did, I got a “503.1 Service Unavailable” error. The first thing I did was go over all my settings to make sure I hadn’t screwed something up when I set up the site. It was pointing to the right place, and the app pool settings seemed to be all right. However, when I got the 503.1 error and went back to my app pool list, I saw that the app pool I was using was stopped again. I must have started it a dozen times to verify that I wasn’t seeing things. After having a colleague look at it and not finding an answer, I started poking around Google. I came across a post from Phil Haack about the same error. His fix was not mine, however: when I ran his command on the CLI, I didn’t see the reserved routes for HTTP.SYS there. Finally, I looked in the event viewer (where I should have looked as soon as I saw that my app pool was stopping) and saw an error in there. For the IIS-W3SVC-WP source I saw:
      The worker process for application pool 'DefaultAppPool' encountered an error 'Cannot read configuration file due to insufficient permissions' trying to read configuration data from file '\\?\C:\Windows\Microsoft.NET\Framework64\v4.0.30319\CONFIG\machine.config', line number '0'. The data field contains the error code.
    So I went to that path and saw a little lock on the file icon. I opened up the Security tab of the file properties and saw that the IIS_IUSRS group was missing. On a machine that was working correctly, I verified that it indeed had the IIS_IUSRS group set to Read and Read & Execute allowed. So I set mine up the same way and voila! Hopefully this helps somebody else, too.

    Read the article

  • Do I need to replace my hard disk?

    - by Sneha Kamath
    Hello everyone. Whenever I start my computer, Ubuntu pops up the following error:
      A hard disk may be failing
      One or more hard disks report health problems
    A friend of mine ran some tests and found that my hard disk has 74 bad sectors. Is this merely a software issue that a complete format of the disk will solve, or is it a hardware issue that means I will have to replace the hard disk? Awaiting your responses. Thanks, Sneha Kamath.
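    For anyone wanting to double-check, here is a sketch of how such a test can be reproduced, assuming the smartmontools package is installed and the drive is /dev/sda (adjust the device name to match your system):
      # Overall health verdict as reported by the drive itself
      sudo smartctl -H /dev/sda
      # Full SMART attributes; reallocated and pending sectors are the ones to watch
      sudo smartctl -a /dev/sda | grep -iE 'reallocated|pending'
    Broadly speaking, sectors the drive itself has reallocated are a physical symptom that a format will not undo, and a count that keeps growing is the usual sign a drive needs replacing.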

    Read the article

  • Would you expect this then?

    - by GrumpyOldDBA
    I've just been working through a database looking for log tables whose data can be aged out – deleted, if you like. I was using SSMS for this and used the default right-click "Select Top 1000 Rows" as a quick view of the data. Having established that I had unwanted data in such a table, I deleted based upon getdate()-30 – nearly 7,000 rows deleted. I then highlighted the top-1000-rows query and re-ran it; surprisingly, I still got the same result set, with 2006 dates in it, and 1,000 rows. A quick...(read more)

    Read the article

  • Installing Ubuntu 12.04 on NUC Intel i3 DC3217IYE

    - by Kieron
    System: NUC i3 (DC3217IYE), 2 HDMI ports, Ethernet, no wireless, UEFI boot; 2x2 GB RAM; 30 GB mSATA internal drive; Ubuntu 12.04.3 64-bit. Hello, I am having a lot of trouble loading an OS on my NUC. I started out attempting another OS with various loaders (beast/hack) without much success (various panics on boot, endless reboot loops, or graphics failures), so after many tries I decided to attempt Ubuntu. Many years ago I loaded Ubuntu on an e-machine without an issue, so I figured it would go smoothly; nothing could be further from the truth. The live USB stick loads, but the installation always fails at the GRUB 2 step, and obviously it won't boot from the SSD without GRUB 2. I searched and found that Boot-Repair should fix it... so I created a live USB with Boot-Repair... but it refuses to repair GRUB because the install never finished and the needed partitions are not fully created and flagged. It also demands an internet connection, which I am unable to provide. I then ran GParted in an attempt to create the needed partitions manually, then re-ran Boot-Repair with the "check internet" option turned off and the reinstall-GRUB step disabled, hoping it would create the missing directories and/or fix the flags. It appeared to run successfully, but back on the Ubuntu live USB the installer still fails at the GRUB 2 step. Also, GParted doesn't offer the same choices the Ubuntu installer has when creating partitions, so it didn't recognize that I already had a root partition or an EFI partition, and sometimes it couldn't tell what format a partition was – all very annoying. The reason I can't connect to the internet (and can't upload the error logs) is that the NUC only has Ethernet and the location where I have to set it up is too far from the modem; I cannot move the monitor closer to the modem, as it is a 50-inch LCD. I just want to do a basic install with one user account and remote desktop (VNC) turned on, so I can move the NUC next to the modem, connect via Ethernet, and finish setting it up via remote desktop (VNC Chicken) from my Mac. While I await any assistance you may be able to provide, I am going to attempt to switch to the 32-bit version and legacy boot to see whether that can install GRUB. Thanks again to anyone who can come up with a possible solution – I would love to just hit "Erase disk and install Ubuntu" if anyone can figure out what is stopping that simple answer from working. Also, discs (CD/DVD) are not an option, as neither my Mac mini nor my NUC has an optical drive, and I have no desire to buy one for a single task.
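    If it helps frame an answer, this is the sort of manual GRUB install I gather can be done from the live session via chroot. It is only a sketch under assumed device names – here /dev/sda1 is taken to be the EFI System Partition and /dev/sda2 the root partition; the real layout should be checked with sudo parted -l first:
      # Mount the half-installed system and its EFI partition
      sudo mount /dev/sda2 /mnt
      sudo mount /dev/sda1 /mnt/boot/efi
      # Bind the virtual filesystems the GRUB tools expect, then enter the system
      for fs in /dev /dev/pts /proc /sys; do sudo mount --bind $fs /mnt$fs; done
      sudo chroot /mnt
      # Inside the chroot: write GRUB to the disk and regenerate its config
      grub-install /dev/sda
      update-grub
      exit
    The sketch also assumes the live USB was booted in UEFI mode, so that the EFI variant of GRUB (grub-efi-amd64 on 12.04) is the one being written.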

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >