Search Results

Search found 30146 results on 1206 pages for 'back to the future'.


  • Error with missing wallpaper after removing Unity via sudo commands

    - by Vray
    After I pasted these commands into the terminal, my wallpaper became white:

        sudo ln -s /etc/xdg/xdg-une/autostart/maximus-autostart.desktop /etc/xdg/autostart/
        sudo ln -s /etc/xdg/xdg-une/autostart/netbook-launcher.desktop /etc/xdg/autostart/
        sudo ln -s /usr/share/gconf/une/default/20_une-gconf-default /usr/share/gconf/defaults/
        sudo ln -s /usr/share/gconf/une/mandatory/20_une-gconf-mandatory /usr/share/gconf/defaults/
        sudo update-gconf-defaults

    Please help me: what is the -s option for, and how do I get my wallpaper back? Best regards from a newbie.

    Read the article

  • Custom session: window does not capture full screen area by default (12.04)

    - by juzerali
    I am trying to create a custom session by creating a custom.desktop file in the /usr/share/xsessions folder. Remember, this is not a GNOME or some other standard session; I have created my own applications for this session, and they are simple.

    Case 1: Chrome browser. Contents of the custom.desktop file:

        [Desktop Entry]
        Name=Internet Kiosk
        Comment=This is an internet kiosk
        Exec=google-chrome --kiosk
        TryExec=
        Icon=
        Type=Application

    Issue: the Chrome browser starts in kiosk mode but does not capture the complete screen area. Some area is left at the bottom and right side of the screen.

    Case 2: Custom PyGTK app (Quickly). Contents of the custom.desktop file:

        [Desktop Entry]
        Name=Custom Kiosk
        Comment=This is a custom kiosk
        Exec=~/MyCustomPyGTKApp
        TryExec=
        Icon=
        Type=Application

    Issue: my custom PyGTK app has window.fullScreen() in the code, which means it should open in full screen without the window chrome (and it does under the normal session). But that, too, leaves lots of space around it.

    Need help: can anyone tell me what's going on here? I think it's some issue with borders, as pointed out in Step 8 of http://www.instructables.com/id/Setting-Up-Ubuntu-as-a-Kiosk-Web-Appliance/?ALLSTEPS : "If by chance, Google Chromium is not stretched to the edges with the --kiosk switch enabled there is a simple fix. To stretch Chromium simply log in as your regular user and edit chromeKiosk.sh to not have the --kiosk switch. Then log in as the restricted user, click the wrench and choose options. Then on the Personal Stuff tab select Hide system title bar and use compact borders. Close the options screen and stretch Chromium to fit the monitor. Then go back into the options window and set it to Use system title bar and borders. After this is done, log out of your restricted user (might need to just reboot) and log into your regular user. Edit chromeKiosk.sh back to include the --kiosk switch again and Chromium should be full screen next time you log into the restricted user."

    If I were to use a custom PyGTK or gtkmm app, how should I get around this issue? window.fullScreen() should occupy the complete screen area. This has to be done programmatically, or in some other way that can scale: I have to deploy this on a large number of machines located in different geographical areas, and doing it manually on every machine is not possible.

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later).

    This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet). CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier. It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it. In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session.

    While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions. If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be. CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back": HA-HA, I GOT YOU! Since GO starts a new batch all variable declarations are lost.

    Here's a simple example I recently used at work. I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput. I also needed to log how long it took for the script to run and include the mirror settings for the database in question. I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable. My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not).

    Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used. As it turns out, I only needed to grab the start time at the beginning; I could get the rest of the data at the end of the process. The code is pretty small:

        declare @time binary(128)=cast(getdate() as binary(8))
        set context_info @time

        ... rest of Big Adventure code ...

        go
        use master;
        insert mirror_test(server,role,partner,db,state,safety,start,duration)
        select @@servername, mirroring_role_desc, mirroring_partner_instance,
               db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc,
               cast(cast(context_info() as binary(8)) as datetime),
               datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate())
        from sys.database_mirroring
        where db_name(database_id) like 'Adv%';

    I declared @time as a binary(128) since CONTEXT_INFO is defined that way. I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00. To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion. It's not the safest way perhaps, but it works on my machine. :)

    As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO. With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious how long certain parts take. In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor. (His code is in UPPERCASE as it was in the original, mine is all lowercase):

        declare @msg binary(128)
        set @msg=cast('Altering bigProduct.ProductID' as binary(128))
        set context_info @msg
        go
        ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg1 binary(128)
        set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128))
        set context_info @msg1
        go
        ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
        GO
        set context_info 0x0
        go
        declare @msg2 binary(128)
        set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128))
        set context_info @msg2
        go
        ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg3 binary(128)
        set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
        set context_info @msg3
        go
        ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
        GO
        set context_info 0x0
        go
        declare @msg4 binary(128)
        set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
        set context_info @msg4
        go
        CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost)
        GO
        set context_info 0x0

    This doesn't include the entire script, only those portions that altered a table or created an index. One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression. And since GO starts a new batch I need to declare a variable in each one. And of course I have to use CAST because it won't implicitly convert varchar to binary. And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes. And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work.

    So what does all this aggravation get you? As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):

        select CAST(context_info as varchar(128))
        from sys.dm_exec_sessions
        where session_id=51

    Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages. You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it). Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction. All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation. If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables. That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
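
    For that closing exercise (tagging each stage message with the system time), a rough, untested sketch along these lines might work; it simply reuses the CAST approach from the post, with the stage text and SPID taken from the examples above:

        -- Hedged sketch of the suggested exercise: pack the current time into the
        -- first 8 bytes of CONTEXT_INFO and the stage text into the remaining 120.
        declare @msg binary(128)
        set @msg = cast(getdate() as binary(8)) + cast('Altering bigProduct.ProductID' as binary(120))
        set context_info @msg

        -- From another connection, split the two parts back out (SPID 51 as before):
        select cast(substring(context_info, 1, 8) as datetime)       as stage_started,
               cast(substring(context_info, 9, 120) as varchar(120)) as stage_message
        from sys.dm_exec_sessions
        where session_id = 51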

    Read the article

  • Ubuntu 12.10 stuck in a login loop

    - by Calvin Wahlers
    My problem: as you can guess, my Ubuntu 12.10 is stuck in a login loop when I try to enter my desktop. That means the screen goes black, and soon after that the login screen comes back. I'm an Ubuntu newbie, so if there's an answer please explain it in simply understandable language :) I've already read that the problem might be caused by a graphics error, so I'll post my graphics hardware too: ATI Radeon 7670M. Hope you can help me, thank you ;)

    Read the article

  • Migrate Your Site to TYPO3 CMS

    What more can you ask from a CMS that gives you complete freedom from maintaining your site, coding and frequent changes, plus a low cost of ownership, a self-administered back end and a lot more? The list can continue on and on. TYPO3 CMS has many more advantages over contemporary CMS systems: it is not only friendly to the development community but also liked by most of the users who come across it.

    Read the article

  • The Birth of a Method - Where did OUM come from?

    - by user702549
    It seemed fitting to start this blog entry with the OUM vision statement. The vision for the Oracle® Unified Method (OUM) is to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product.

    Well, it's that time of year again; we just finished testing and packaging OUM 5.6. It will be released for general availability to qualifying customers and partners this month. Because of this, I've been reflecting back on how the birth of Oracle's Unified Method, OUM, came about.

    As the Release Director of OUM, I've been honored to package every method release. No, maybe you'd say it's not so special; of course, anyone can use packaging software to create an .exe file. But to me, it is pretty special, because so many people work together to make each release come about. The rich content that results is what makes OUM's history worth talking about.

    To me, professionally speaking, working on OUM has been "a labor of love". My youngest child was just 8 years old when OUM was born, and she's now in high school! Watching her grow and change has been fascinating; if you ask her, she's grown up hearing about OUM. My son would often walk into my home office and ask "How is OUM today, Mom?" I am one of many people that take care of OUM, and have watched the method "mature" over these last 6 years. Maybe that makes me a "Method Mom" (someone in one of my classes last year actually said this out loud), but there are so many others who collaborate and care about OUM development.

    I've thought about writing this blog entry for a long time just to reflect on how far the Method has come. Each release, as I prepare the OUM Contributors list, I see how many people's experience and ideas it has taken to create this wealth of knowledge, process and task guidance as well as templates and examples. If you're wondering how many people, just go into OUM, select the Resources button at the top of most pages of the method, and on that Resources page click the ABOUT link.

    So now back to my nostalgic moment as I finished the release 5.6 packaging. I reflected back on all the things that happened that caused OUM to become not just a dream but to actually come to fruition. Here are some key conditions that make each release of the method possible:

      - A vision to have one method instead of many methods, thereby focusing on deeper, richer content
      - People within Oracle's consulting organization willing to contribute to OUM, providing Subject Matter Experts who are willing to write down and share what they know
      - Oracle's continued acquisition of software companies, and the need to assimilate high-quality existing materials from these companies
      - The need to bring together people from very different backgrounds and provide a common language to support Oracle product implementations that often involve multiple product families

    What came first, and then what was the strategy?

    Initially OUM 4.0 was based on Oracle's J2EE Custom Development Method (JCDM). It was a good "backbone" (work breakdown structure): it was Unified Process based, and had good content around UML as well as custom software development. But it needed to be extended in order to achieve the OUM vision. What happened after that was to take in the "best of the best": the legacy and acquired methods were scheduled for assimilation into OUM, one release after another. We incrementally built OUM. We didn't want to lose any of the expertise that was reflected in AIM (Oracle's legacy Application Implementation Method), Compass (PeopleSoft's application implementation method) and so many more.

    When was OUM born?

    OUM 4.1 was published April 30, 2006. This release allowed Oracle's Advanced Technology groups to begin the very first implementations of Fusion Middleware. In the early days of the Method we would prepare several releases a year. Our iterative release development cycle began then and continues to be refined with each Method release; now we typically see one major release each year.

    The OUM release development cycle is not unlike many Oracle implementation projects, in that we need to gather requirements, prioritize, prepare the content, test, package and then go production. Typically we develop an OUM release MoSCoW list (must have, should have, could have, and won't have) right after the prior release goes out. These are the high-level requirements. We break the timeframe into increments, with frequent checkpoints that help us assess the content and measure progress. We work as a team to prioritize what should be done in each increment. Yes, the team provides the estimates for what can be done within a particular increment. We sometimes have Method Development workshops (physical or virtual) to accelerate content development on a particular subject area; that is where the best content results.

    As the written content nears the final stages, it goes through edit and evaluation via peer reviews, and then moves into the release staging environment. Then content freeze and testing of the method pack take place. This iterative cycle is run using the OUM artifacts that make sense ("fit for purpose"); project plans, MoSCoW lists, and test plans are just a few of the OUM work products we use on a Method release project.

    In 2007, OUM 4.3, 4.4 and 4.5 were published. With the release of 4.5, our custom BI method (Data Warehouse Method FastTrack) was assimilated into OUM. These early releases helped us align Oracle's Unified Method with other industry standards.

    Then in 2008 we made significant changes to the OUM "backbone" to support Applications Implementation projects; that went into the OUM 5.0 release. Now things started to get really interesting. Next we had some major developments in the Envision focus area, in the area of Enterprise Architecture. We acquired some really great content from the former BEA, the Liquid Enterprise Method (LEM), along with some SMEs who were willing to work at bringing this content into OUM. The Service Oriented Architecture content in OUM is extensive and can help support the successful implementation of Fusion Middleware, as well as Fusion Applications.

    Of course we've developed a wealth of OUM training materials; that work also helps to improve the method content. It is one thing to write "how to", and quite another to be able to teach people how to use the materials to improve the success of their projects. I've learned so much by teaching people how to use OUM.

    What's next?

    So here toward the end of 2012, what's in store in OUM 5.6? Well, I'm sure you won't be surprised that the answer is Cloud Computing. More details to come in the next couple of weeks! The best part of being involved in the development of OUM is to see how many people have "adopted" OUM over these six years: clients, partners, and Oracle consultants. The content just gets better with each release.
I’d love to hear your comments on how OUM has evolved, and ideas for new content you’d like to see in the upcoming releases.

    Read the article

  • SQLPASS Summit 2011 -- I'm going but not as a speaker

    - by NeilHambly
    This post is about my attempt, and slight failure, at getting a presenting session at this year's SQLPASS Summit 2011. For the first time I had put in two submissions (I think we had a maximum of four we could enter, but I was happy to go with just two this time: one I had already presented and one that was nearly completed). My general session (75 minutes) was the same session on "Waits" I had done at SQLBits 8 back in Brighton last April, and the other was a new half-day (3.5 hours) format session I'm completing on the SQLOS layer. Well...(read more)

    Read the article

  • Alchemy-like game for the web, open source. Any ideas for element combinations?

    - by JohnDel
    I created a web game like the Android game Alchemy. It's open source, and in the back end you can create your own elements and your own game. I was wondering what element ideas would be good to implement as a prototype/demo. Some ideas are:

      - Colors
      - Programming languages
      - Chemical compounds
      - The same as the original Alchemy
      - Evolution of biological organisms

    What do you think? Any specific combination ideas?

    Read the article

  • DNN World 2011

    - by bdukes
    We're on the plane flying back to St. Louis from DNN World 2011. I gave a presentation titled DNN 6 UI/UX Patterns, discussing the form patterns introduced in the administrative modules in DNN 6 (the new look and feel that you immediately noticed after logging into your new DNN 6 site). Many folks asked about seeing the examples that I presented, and they are available as a repository on GitHub, at https://github.com/bdukes/DNN-World-Demos . This includes a series of small, one...(read more)

    Read the article

  • I cannot save php.ini in Ubuntu 12.04

    - by Ashok KS
    I want to change upload_max_filesize = 2M to 50M, so I started editing php.ini, but when I try to save it, it displays the error message below:

        Could not create a backup file while saving /etc/php5/apache2/php.ini

        gedit could not back up the old copy of the file before saving the new one. You can ignore this warning and save the file anyway, but if an error occurs while saving, you could lose the old copy of the file. Save anyway?

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - Introduction

    In his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let's take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I'll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let's get started.

    We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do... often what we do best! We "fat finger", we spill drinks on keyboards, we unplug the wrong cables, etc. In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events:

      - Accidentally updated a table with wrong values!!
      - Performed a batch update that went wrong, due to logical errors in the code!!
      - Dropped a table!!

    How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point in time when the error occurred (incomplete or point-in-time recovery). Moreover, depending on the type of fault, it's possible that some services – or even the entire database – would have to be taken down during the recovery process.

    Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction, and how can it be reversed? Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and more convenient recovery from logical corruptions.

    History

    Flashback Query, the first Flashback technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time. Flashback technologies were further enhanced in Oracle 10g, to provide fast, easy recovery at the database, table, row, and even transaction level. Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation. In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c.

    Flashback Features

      - Oracle Flashback Database enables point-in-time recovery of the entire database without requiring a traditional restore and recovery operation. It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.
      - Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.
      - Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.
      - Oracle Flashback Query enables data to be viewed at a point in time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). It can also be used to build self-service error correction into applications, empowering end users to undo and correct their errors.
      - Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCNs).
      - Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing its SQL.
      - Oracle Flashback Transaction is a procedure used to back out a transaction and its dependent transactions.

    Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via the SQL command line or via Enterprise Manager. Most of the Flashback technologies depend on the available UNDO to retrieve older data. The following describes the various Flashback technologies: their purpose, dependencies and situations where each individual technology can be used.

    Example Syntax

    Error investigation related: the purpose is to investigate what went wrong and what the values were at certain points in time.

      - Flashback Query (select .. as of SCN | Timestamp) - helps to see the value of a row or set of rows at a point in time
      - Flashback Version Query (select .. versions between SCN | Timestamp and SCN | Timestamp) - helps determine how the value evolved between certain SCNs or between timestamps
      - Flashback Transaction Query (select .. XID=) - helps to understand how the transaction caused the changes

    Error correction related: the purpose is to fix the error and correct the problems.

      - Flashback Table (flashback table .. to SCN | Timestamp) - to rewind the table to a particular timestamp or SCN to reverse unwanted updates
      - Flashback Drop (flashback table .. to before drop) - to undrop or undelete a table
      - Flashback Database (flashback database to SCN | Restore Point) - this is the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery).
      - Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..)) - to reverse a transaction and its related transactions

    Advanced use cases

    Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback:

      - Block media recovery by RMAN - to perform block-level recovery
      - Snapshot Standby - where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes
      - Re-instating an old primary in a Data Guard environment - this avoids the need to restore an old backup and perform a recovery to make it a new standby
      - Guaranteed Restore Points - to bring back the entire database to an older point in time in a guaranteed way
      - and so on...

    I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors. As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.
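
    To make the syntax patterns above concrete, here is a minimal, hedged sketch of a Flashback Query followed by a Flashback Table rewind; the hr.employees table and the one-hour window are illustrative assumptions, not part of the original post:

        -- Hedged sketch: hr.employees and the one-hour window are hypothetical.
        -- Flashback Query: view the rows as they were an hour ago.
        SELECT employee_id, salary
        FROM   hr.employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
        WHERE  employee_id = 100;

        -- Flashback Table: rewind the table to the same point in time.
        -- Row movement must be enabled on the table first.
        ALTER TABLE hr.employees ENABLE ROW MOVEMENT;

        FLASHBACK TABLE hr.employees
          TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);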

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe this.

    When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It's wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do.

    I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination.

    Let's look at a very simple query as an example, using the AdventureWorks database. We're picking the different Weight values out of the Product table, and it's doing this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

    Before I explain the problem here, let me jump back into the SSIS world... If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, we consider our Distinct Sort operator. It's also blocking. It won't let data through until it's seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled.

    Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow.

    Luckily, T-SQL gives us a brilliant query hint to help avoid this:

        OPTION (FAST 10000)

    This hint means that it will choose a query plan optimised for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.

    First let's consider a simple example, then we'll look at a larger one. Consider our weights. We don't have 10,000, so I'm going to use OPTION (FAST 1) instead. You'll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking.

    The data here isn't sorted, of course. It's in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn't come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it's seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it's a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that's exactly what we want.

    The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying "Give me one row" first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator.

    Now I'm going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint... and now with OPTION (FAST 10000): a very different plan, I'm sure you'll agree. In case you're curious, those arrows in the top one are 780,000 rows in size. In the second, they're estimated to be 10,000, although the Actual figures end up being 780,000.

    The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it's doing Nested Loops, across 780,000 rows! That's not generally recommended at all. That's "go and make yourself a coffee" time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared.

    The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn't care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package and sit there waiting to see the Debug information start appearing. You look for the numbers on the data flow, and for operators going Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn't the case at all – but it felt like it to SSIS.
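
    As a concrete, hedged illustration of the hint in context, the Weight example from the start of the post would look roughly like the sketch below; the table and column names are assumed from AdventureWorks, and the hint value matches the default SSIS buffer size discussed above:

        -- Minimal sketch of the hint applied to the post's Weight example
        -- (Production.Product is the assumed AdventureWorks table).
        SELECT DISTINCT Weight
        FROM Production.Product
        OPTION (FAST 10000);  -- optimise the plan for the first 10,000 rows (one SSIS buffer)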

    Read the article

  • Boot Problem in Asus EEE PC 1015CX

    - by Sâmrat VikrãmAdityá
    I am a newbie to the Linux world, although I have previously used Ubuntu 11.04 for daily work (net access and simple recordings using Audacity). I am not sure at what level I stand as a newbie. I bought this Asus Eee PC two days back; the model is the Asus 1015CX (see the specs here: http://www.flipkart.com/asus-1015cx-blk011w-laptop-2nd-gen-atom-dual-core-1gb-320gb-linux/p/itmd8qu4quzu8srr ). I created a live USB to install 12.10, and the USB booted fine. When I clicked the "Try Ubuntu" option, it showed me a black screen with a blinking cursor; I waited for 15 minutes and had to restart using the power button. On clicking the "Install Ubuntu" button, the install process went seamlessly. [I have Windows 7 installed on one of the partitions.] I installed it alongside the previous Windows installation. The system was then rebooted for the first time. It showed the GRUB menu and I selected the first option, Ubuntu. After showing the splash screen for a second, it began showing various messages on a black screen and then got stuck on the "Stopping Save kernel state" message. I had to force-shut the system using the power button. Sometimes it just gives a blank screen with a blinking cursor, and on pressing the power button some messages pop up stating that acpid is doing something and stopping services, and the system shuts down. I tried booting with "nomodeset" and other parameters as directed in solutions to previous such problems on the forums. Also, Ctrl+Alt+F1, F2, F3, F4, F5, F6... F12 is not doing anything for me anywhere. At installation, I checked the "Log in automatically" option. On booting into recovery, several options come up; clicking resume just gives me a blank screen with a blinking cursor. After dropping to a root shell and remounting the filesystem as read-write, I was able to run some commands that worked for others:

        startx                      -- several messages come up, the last one stating "Fatal error: No screen found"
        sudo service lightdm start  -- gives a blank screen with a blinking cursor
        lspci | grep VGA            -- shows some Intel integrated graphics... something I don't remember

    I have reconfigured xserver-xorg and lightdm, and reinstalled ubuntu-desktop and unity. What should I do? Will going back to 11.04 work, or should I give up all hope of running Ubuntu on my netbook? Please help.

    Read the article

  • How do I fix my theme?

    - by SepiDev
    After installing and using Ubuntu 11.10 for a while, I decided to install GNOME Shell. After installing it and rebooting my system, I saw that the Ubuntu light theme (Ambiance/Radiance) was gone. I checked /usr/share/themes and /usr/share/icons, and it seems that the light themes and mono icons still exist. I even reinstalled the gtk3-engine-unico package, but none of these efforts fixed my problem :( My desktop now looks like this: What should I do to get my default Ubuntu theme back?

    Read the article

  • vi issue in SSH TTYs to Ubuntu 14.04.1 LTS

    - by Steve Campbell
    After upgrading my server to Ubuntu 14.04.1 LTS, I can no longer use the vi editor to edit anything in an SSH terminal (I access the server by launching ssh sessions from Cygwin running on Windows). The empty portions of the vi window fill with garbage. The workaround is to launch an xterm from the server back to my Cygwin/X display. Using vi from within the xterm works fine. Setting my TERM to vt100/vt220/xterm does not help.

    Read the article

  • Running 12.04 on a Dell Inspiron 1545 having random system lockups

    - by Kris
    I just reinstalled a fresh 12.04 because I couldn't even boot my previous installation anymore. I just installed 8 GB of RAM in this laptop, but the problem happened even before this, and all I have installed in this run of Ubuntu is: ia32libs, Oracle JDK 7 and the 32-bit JRE, junipernc, and jupiter. The laptop has never been used very heavily, so could someone get back to me on this and let me know what further information you need about my machine, or which files to upload? Thanks!

    Read the article

  • Why does Chrome video performance substantially degrade after waking from suspend in 10.10?

    - by Grant Heaslip
    Note: For some more details, some of which may not be true given what I've figured out, see this post.

    When I first boot my computer, video performance (both native H.264 HTML5 in YouTube and Vimeo, and in Flash) in Chrome is perfectly reasonable. CPU usage stays low, everything works correctly, and the video is silky-smooth. But for whatever reason, if I suspend my computer then wake it up, video performance plummets. Full-screen HTML5 video is choppy at best, and full-screen Flash video basically brings my computer to its knees (I'm talking less than a frame a second, and a 5-second lead time to leave full screen after hitting Esc). Restarting Chrome doesn't fix this; I need to completely restart my machine before performance goes back to normal. Video performance in other applications, such as Movie Player, doesn't seem to be affected at all by the suspend cycle; it's only Chrome.

    I'm using a Lenovo X201, with an Intel GMA HD graphics chipset and Intel components all around (I don't need any proprietary drivers). This didn't happen in 10.04, and I haven't done anything that I think would have caused this to happen. It's possible that a Chrome release could have caused this, but it seems less likely than a regression between 10.04 and 10.10. Any ideas?

    EDIT: In response to Georg's comment, logging in and out doesn't fix it. Restarting Compiz or switching to Metacity (at least by using "compiz/metacity --replace & disown"; am I doing it right?) doesn't help (actually, it seemed to help somewhat with Flash once, but I haven't been able to reproduce this). I'm not sure about GDM: when I use "sudo restart gdm" I get kicked back to the Linux shell (?), which I have no idea how to get out of. Also, I want to make very clear that this isn't just a case of Flash sucking (it does, but that's beside the point). I'm seeing the same general problem with HTML5 videos, and Flash is performing better on my Nexus One than it does on my Core i5 laptop. There's something screwy going on with Chrome and/or 10.10.

    Read the article

  • Web Designer and/or Developer

    - by chimps
    We've outsourced our app development, and the devs have created a DB hosted on Amazon EC2. We're in talks with a web designer for the website, but the designer does not do any back-end integration, i.e. connecting the website with the DB created by the app developers. Do you recommend getting designs from the designer and hiring a freelancer to do the front-end/back-end integration? I mean, would there be issues or complications? Or should we go with a designer who provides the complete package?

    Read the article

  • Are there any open-source VGA drivers?

    - by blade19899
    Are there any open-source VGA drivers? I seem to remember that back when I was using 10.10, or an older release, people were writing open-source VGA drivers for Linux, but I can't seem to find a web page for that project. I was wondering because I like the idea and want to install them and report bugs; somehow I find the idea of Ubuntu with open-source VGA drivers fun, and who knows, maybe they will even be better and more stable than the proprietary versions.

    Read the article

  • Cannot connect to the Internet

    - by Pratap
    My desktop has a wired broadband connection that shows as connected, but it is unable to load any web page in Ubuntu 12.04, although it works fine with Windows 7. My laptop's wireless connection also shows as connected, but again it does not load any web page. Please help. Running sudo dhclient eth1 on my laptop produces no result; see the screen output:

        prajna@LAPTOP:~$ sudo dhclient eth1
        [sudo] password for prajna:
        prajna@LAPTOP:~$ sudo dhclient eth1
        [sudo] password for prajna:

    After giving the password, some time later the screen simply comes back to prajna@LAPTOP:~$.

    Read the article

  • How In-Memory Database Objects Affect Database Design: The Conceptual Model

    - by drsql
    After a rather long break in the action to get through some heavy tech editing work (paid work before blogging, I always say!) it is time to start working on this presentation about In-Memory Databases. I have been trying to decide on the scope of the demo code in the back of my head, and I have added more and taken away bits and pieces over time trying to find the balance of "enough" complexity to show data integrity issues and joins, but not so much that we get lost in the process of trying to...(read more)

    Read the article

  • Taking 10 minutes to boot up!

    - by oshirowanen
    I just added two PCI-E-to-IDE cards to my computer so I can use two old IDE hard drives. Everything works fine, except the OS boot time has gone from about 10 seconds to about 10 minutes... If I remove both cards, it takes about 10 seconds to boot up; if I add either one of the cards back in, it still takes 10 seconds to boot up; but as soon as I have both cards in, it takes about 10 minutes. Why would this be happening?

    Read the article
