Search Results

Search found 7257 results on 291 pages for 'eager loading'.

  • CodePlex Daily Summary for Sunday, November 03, 2013

    CodePlex Daily Summary for Sunday, November 03, 2013Popular ReleasesuComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New DataTypesImage Point XML DropDownList XPath Templatable List New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 14873 14833 14864 14855 / 14860 14816 14823 Drag & Drop support for rows Su...SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.9 binaries: Fixed issue with ICollectionView containg null values (AutoFilter issue)Community TFS Build Extensions: October 2013: The October 2013 release contains Scripts - a new addition to our delivery. These are a growing library of PowerShell scripts to use with VS2013. See our documentation for more on scripting. VS2010 Activities(target .NET 4.0) VS2012 Activities (target .NET 4.5) VS2013 Activities (target .NET 4.5.1) Community TFS Build Manager VS2012 Community TFS Build Manager VS2013 The Community TFS Build Managers for VS2010, 2012 and 2013 can also be found in the Visual Studio Gallery where upda...WMI Inventory Client: WMI Inventory Client: WMI Inventory Client ?????????????? ?????????????? ?? WMISuperSocket, an extensible socket server framework: SuperSocket 1.6 stable: Changes included in this release: Process level isolation SuperSocket ServerManager (include server and client) Connect to client from server side initiatively Client certificate validation New configuration attributes "textEncoding", "defaultCulture", and "storeLocation" (certificate node) Many bug fixes http://docs.supersocket.net/v1-6/en-US/New-Features-and-Breaking-ChangesFile System Explorer: Beta 1: Try me and please give feedback via Discussions and issues Installation: Download the zip file Unblock it Unzip to a suitable location Just run Filesystemexplorer.exe Enjoy Updates: V1.1 Beta: Fixed some low level file search issuesBarbaTunnel: BarbaTunnel 8.1: Check Version History for more information about this release.Mugen MVVM Toolkit: Mugen MVVM Toolkit 2.1: v 2.1 Added the 'Should' class instead of the 'Validate' class. The 'Validate' class is now obsolete. Added 'Toolkit.Annotations' to support the Mugen MVVM Toolkit ReSharper plugin. Updated JetBrains annotations within the project. Added the 'GlobalSettings.DefaultActivationPolicy' property to represent the default activation policy. Removed the 'GetSettings' method from the 'ViewModelBase' class. Instead of it, the 'GlobalSettings.DefaultViewModelSettings' property is used. 
Updated...SharePoint User Permission Check: SP User Permission Check: Modified Current.Web.Title in output label.TFS Event Workflows: TFS Event Workflows 0.10.41576.0 - TFS 2012-2013: Supports TFS 2012 and TFS 2013 For a TFS 2010 version look at https://tfseventworkflows.codeplex.com/releases/view/102444 New Features multiple application tiers storage of workflows, configurations and activities in the version control support for async execution in TFS job agent selection of collection/project filters in config file simple disable in config file simplified configurationCrowd CMS: Crowd CMS FREE - Official Release: This is the original source files for Crowd CMS Free (v1.0.0) and is the latest stable release which has been bug-tested and fixed.Sea Dragon AJAX Viewer Web Part: Sea Dragon Ajax Web Part: The Seadragon Viewer WSP and companion literature. There are three seperate guides that explain how to get up and running: - Seadragon Viewer Web Part Installation Guide Creating Deep Zoom Images for Seadragon Viewer Bringing it all together These guides are currently very basic and are offered as guides for getting full usage out Seadragon. Please post any suggestions for further documentation in the discussions forumProject Nonnon: 2013_10_30: ----------==========----------==========----------==========---------- "No news is good news." ----------==========----------==========----------==========---------- Change Log 2013/10/30 BUGFIX win32/explorer.c n_explorer_path_get() : comment OLD : typo NEW : fixed Felis Win8 or latert : Link Maker OLD : not function in some cases NEW : fixed Nyaurism Formatter : byte count label : when resized OLD : text will be broken NEW : fixed NEW_FEATURE win...NAudio: NAudio 1.7: full release notes available at http://mark-dot-net.blogspot.co.uk/2013/10/naudio-17-release-notes.htmlFormula Calculation Toolkit: FCT Library: Alfa release for Formula Calculation Toolkit.WebExtras: v1.3.0 Beta: Enh: Adding support for Bootstrap 3.x Enh: Adding support for Gumby 2.5.x Enh: Adding a new string extension Remove() to remove occurences of given string Enh: Adding a ToDictionary() for name value collections Enh: The dataTable sort extension is now a little more intelligent and robust Enh: General under the hood code enhancements Fix: Hyperlink extensions now handle MVC Areas Fix: Marking JsFunc as serializable otherwise when using the ASP.NET State Server, the object does no...DirectX Tool Kit: October 2013: October 28, 2013 Updated for Visual Studio 2013 and Windows 8.1 SDK RTM Added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning Model loading and effect factories support loading skinned models MakeSpriteFont now has a smooth vs. 
sharp antialiasing option: /sharp Model loading from CMOs now handles UV transforms for texture coordinates A number of small fixes for EffectFactory Minor code and project cleanup ...ExtJS based ASP.NET Controls: FineUI v4.0beta1: +2013-10-28 v4.0 beta1 +?????Collapsed???????????????。 -????:window/group_panel.aspx??,???????,???????,?????????。 +??????SelectedNodeIDArray???????????????。 -????:tree/checkbox/tree_checkall.aspx??,?????,?????,????????????。 -??TimerPicker???????(????、????ing)。 -??????????????????????(???)。 -?????????????,??type=text/css(??~`)。 -MsgTarget???MessageTarget,???None。 -FormOffsetRight?????20px??5px。 -?Web.config?PageManager??FormLabelAlign???。 -ToolbarPosition??Left/Right。 -??Web.conf...CODE Framework: 4.0.31028.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.New Projects.NET Site Storage: Write .NET code to use an abstract storage system that can work with a variety of storage, such as local file system and Azure blob.Asp.net Mvc Ajax Infinite Scroll: Asp.net Mvc 4 Ajax Json Infinite Scrollblueblue: tetCar: CArCSharp Generic Data Access: Yet another generic data access for .NETDotNet Manuals: DotNet Manuals aims to provide developers an easy way to create, manage and distribute manuals and various documentation for their applications and libraries.EIB Watcher .Net Library: .Net Library for EIB/NKX bus accessopenGamification: This intends to become an open source gamification framework based on TypeScript.PaginaWebCursoNetAcc: aSimple Person Manager: The application accepts POST requests with JSON data about a Person, stored the values in Azure Table Storage and accepts GET requests to retrieve it back.Stock Track: If you have a small retail store with simple stock management and tracking requirements, this program might work for you. Stock Track easily categorises your ptrapawebapp: testVisualStateManager: This is a simple, but quite powerful mechanism allowing you to separate the application UI from application logic in Windows Forms.Yet Another VirtualBox ToolSet: This is going to be another VirtualBox Toolset.Zodinet: Co ca ngua

    Read the article

  • CascadingDropDown jQuery Plugin for ASP.NET MVC

    - by rajbk
    CascadingDropDown is a jQuery plugin that a select list can use to populate itself automatically via AJAX. A sample ASP.NET MVC project is attached at the bottom of this post.

    Usage

    The code below shows two select lists:

        <select id="customerID" name="customerID">
            <option value="ALFKI">Maria Anders</option>
            <option value="ANATR">Ana Trujillo</option>
            <option value="ANTON">Antonio Moreno</option>
        </select>

        <select id="orderID" name="orderID">
        </select>

    When a customer is selected in the first select list, the second list auto-populates itself with the following code:

        $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders');

    Internally, an AJAX post is made to '/Sales/AsyncOrders' with the post body containing customerID=[selectedCustomerID]. This executes the action AsyncOrders on the SalesController with the signature AsyncOrders(string customerID). The AsyncOrders method returns JSON, which is then used to populate the select list. The JSON format expected is shown below:

        [{ "Text": "John", "Value": "10326" }, { "Text": "Jane", "Value": "10801" }]

    Details

    $(targetID).CascadingDropDown(sourceID, url, settings)

    targetID - The ID of the select list that will auto-populate.
    sourceID - The ID of the select list which, on change, causes the targetID to auto-populate.
    url - The URL to post to.

    Options

    promptText - Text for the first item in the select list. Default: -- Select --
    loadingText - Optional text to display in the select list while it is being loaded. Default: Loading..
    errorText - Optional text to display if an error occurs while populating the list. Default: Error loading data.
    postData - Data you want posted to the url in place of the default. Example: { postData : { customerID : $('#custID'), orderID : $('#orderID') }} will cause customerID=ALFKI&orderID=2343 to be sent as the POST body. Default: a text string obtained by calling serialize on the sourceID.
    onLoading (event) - Raised before the list is populated.
    onLoaded (event) - Raised after the list is populated. The code below shows how to "animate" the select list after load.

    Example using custom options:

        $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders', {
            promptText: '-- Pick an Order--',
            onLoading: function () {
                $(this).css("background-color", "#ff3");
            },
            onLoaded: function () {
                $(this).animate({ backgroundColor: '#ffffff' }, 300);
            }
        });

    To return JSON from our action method, we use the Json ActionResult, passing in an IEnumerable<SelectListItem>.

        public ActionResult AsyncOrders(string customerID)
        {
            var orders = repository.GetOrders(customerID).ToList().Select(a => new SelectListItem()
            {
                Text = a.OrderDate.HasValue ? a.OrderDate.Value.ToString("MM/dd/yyyy") : "[ No Date ]",
                Value = a.OrderID.ToString(),
            });
            return Json(orders);
        }

    Sample Project using VS 2010 RTM: NorthwindCascading.zip
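
    As a rough sketch of how the plugin might be wired up in a view, the snippet below attaches CascadingDropDown once the DOM is ready and overrides the loading and error text using the options documented above; the element IDs and the '/Sales/AsyncOrders' URL are the ones from this post, so adjust them to match your own markup and controller.

        $(document).ready(function () {
            // page wiring sketch - IDs and URL follow the example above
            $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders', {
                promptText: '-- Select an Order --',
                loadingText: 'Loading orders..',
                errorText: 'Could not load orders.'
            });
        });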

    Read the article

  • Cocos2d-xna memory management for WP8

    - by Arkiliknam
    I recently upgraded to VS2012 and try my in dev game out on the new WP8 emulators but was dismayed to find out the emulator now crashes and throws an out of memory exception during my sprite loading procedure (funnily, it still works in WP7 emulators and on my WP7). Regardless of whether the problem is the emulator or not, I want to get a clear understanding of how I should be managing memory in the game. My game consists of a character whom has 4 or more different animations. Each animation consists of 4 to 7 frames. On top of that, the character has up to 8 stackable visualization modifications (eg eye type, nose type, hair type, clothes type). Pre memory issue, I preloaded all textures for each animation frame and customization and created animate action out of them. The game then plays animations using the customizations applied to that current character. I re-looked at this implementation when I received the out of memory exceptions and have started playing with RenderTexture instead, so instead of pre loading all possible textures, it on loads textures needed for the character, renders them onto a single texture, from which the animation is built. This means the animations use 1/8th of the sprites they were before. I thought this would solve my issue, but it hasn't. Here's a snippet of my code: var characterTexture = CCRenderTexture.Create((int)width, (int)height); characterTexture.BeginWithClear(0, 0, 0, 0); // stamp a body onto my texture var bodySprite = MethodToCreateSpecificSprite(); bodySprite.Position = centerPoint; bodySprite.Visit(); bodySprite.Cleanup(); bodySprite = null; // stamp eyes, nose, mouth, clothes, etc... characterTexture.End(); As you can see, I'm calling CleanUp and setting the sprite to null in the hope of releasing the memory, though I don't believe this is the right way, nor does it seem to work... I also tried using SharedTextureCache to load textures before Stamping my texture out, and then clearing the SharedTextureCache with: CCTextureCache.SharedTextureCache.RemoveAllTextures(); But this didn't have an effect either. Any tips on what I'm not doing? I used VS to do a memory profile of the emulation causing the crash. Both WP7.1 and WP8 emulators peak at about 150mb of usage. WP8 crashes and throws an out of memory exception. Each customisation/frame is 15kb at the most. Lets say there are 8 layers of customisation = 120kb but I render then onto one texture which I would assume is only 15kb again. Each animation is 8 frames at the most. That's 15kb for 1 texture, or 960kb for 8 textures of customisation. There are 4 animation sets. That's 60Kb for 4 sets of 1 texture, or 3.75MB for 4 sets of 8 textures of customisation. So even if its storing every layer, its 3.75MB.... no where near the 150mb breaking point my profiler seems to suggest :( WP 7.1 Memory Profile (max 150MB) WP8 Memory Profile (max 150MB and crashes)
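
    For readers following the render-texture approach described above, a minimal sketch of the compositing idea is below; CreateLayerSprite stands in for whatever loads a single customization layer (a hypothetical helper, not part of cocos2d-xna), and the cache clear at the end is the same RemoveAllTextures call mentioned in the question, so treat this as an illustration of the technique rather than a verified fix for the WP8 crash.

        // composite all customization layers into one texture, then drop the source textures
        CCRenderTexture characterTexture = CCRenderTexture.Create((int)width, (int)height);
        characterTexture.BeginWithClear(0, 0, 0, 0);
        foreach (string layerName in new[] { "body", "eyes", "nose", "hair", "clothes" })
        {
            CCSprite layer = CreateLayerSprite(layerName); // hypothetical loader for one layer
            layer.Position = centerPoint;
            layer.Visit();                                 // stamp the layer onto the render texture
        }
        characterTexture.End();
        // release the individual layer textures now that the composite exists
        CCTextureCache.SharedTextureCache.RemoveAllTextures();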

    Read the article

  • Silverlight Cream for March 24, 2010 -- #819

    - by Dave Campbell
    In this Issue: Nokola, Tim Heuer, Christian Schormann, Brad Abrams, David Kelley, Phil Middlemiss, Michael Klucher, Brandon Watson, Kunal Chowdhury, Jacek Ciereszko, and Unni. Shoutouts: Michael Klucher has a short post up For Love of the Game (Development)…, where he's looking for some input from the developer community. Shawn Hargreaves has a link post up of all the Windows Phone MIX10 presentations Chris Cavanagh has a Soft-Body Physics for Windows Phone 7 post up that goes along with one he did 1-1/2 years ago! Jeff Weber posted An Open Letter To Microsoft Regarding The Silverlight Game Development Community Pete Brown posted his MIX10 Recap ... lots of information, and discussion of what he was up to ... I liked the Trivia app Pete... glad to hear that was yours :) I've changed my mind and added a WP7 tag to SilverlightCream. I'll straighten out all the Mobile plus Silverlight links to point at the WP7 tab hopefully tonight. From SilverlightCream.com: EasyPainter Source Pack 3: Adorners, Mouse Cursors and Frames Nokola has been busy with EasyPainter adding in Custom, Extensible Mouse Cursors and Customizable Adorners with extensible adorner frames, and best of all... all with source code! Simulate Geo Location in Silverlight Windows Phone 7 emulator Among the things we don't have in our WP7 emulators is Geo Location... Tim Heuer comes to the rescue with a simulator for it... too cool, Tim! Blend 4: About Path Layout, Part II Christian Schormann is back with Part 2 of his tutorial sequence on the new Path Layout. Really good info and definitely cool presentations of the control. Silverlight 4 + RIA Services - Ready for Business: Exposing OData Services Brad Abrams continues his series with a post on exposing OData services. This looks like a great tutorial on the topic... will probably resolve some questions I've been having :) No Silverlight and Preloader Experience(ish) - in 10 seconds... David Kelley exposes the code he uses on his site, designed to be friendly to Silverlight and non-Silverlight users alike. Merged Dictionaries of Style Resources and Blend Phil Middlemiss has a nice article up on Merged Dictionaries and using multiple resource dictionaries that the app chooses, but also be compatible with Prism and Blend while not eating your system resources out of house and home. XNA Game Studio and Windows Phone Emulator Compatibility Michael Klucher has a definitive post up about getting your XNA and system up-to-speed for WP7... a must-read if you've been running any of the other XNA drops. Windows Phone 7 301 Redirect Bug Brandon Watson reports a 301 Redirect bug on WP7 ... see the code and how he got it, then follow along as he explains all the debug paths he took and what the resolution (?) really is :) Silverlight 4: How to use the new Printing API? Kunal Chowdhury has a tutorial up on printing with Silverlight 4 RC... from the project layout to printing and then printing a smaller section... all good Printing problem in Silverlight 4.0 RC - loading images in code behind Jacek Ciereszko also is writing about printing, and in his case he had problems with loading an image dynamically and printing it... plus he provides a solution to the 'blank page' problem. ToolboxExampleAttribute - a new extension point in Blend 4 (and a few other extensibility related changes) Unni has an article up about Expression Blend 4's new ToolboxExampleAttribute which allow you to have multiple examples of the same type resulting in different XAML produced. Stay in the 'Light! 
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
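
    For anyone reproducing the logical-read and timing figures quoted in the tests above, a minimal sketch of the session settings that produce that output is below; the post itself does not show these SET statements, so this is an assumption about how the numbers were captured.

        -- emit per-statement I/O and CPU/elapsed-time figures for this session
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- re-run the test statement and read the figures from the Messages output
        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;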

    Read the article

  • Anatomy of a serialization killer

    - by Brian Donahue
    As I had mentioned last month, I have been working on a project to create an easy-to-use managed debugger. It's still an internal tool that we use at Red Gate as part of product support to analyze application errors on customer's computers, and as such, should be easy to use and not require installation. Since the project has got rather large and important, I had decided to use SmartAssembly to protect all of my hard work. This was trivial for the most part, but the loading and saving of results was broken by SA after using the obfuscation, rendering the loading and saving of XML results basically useless, although the merging and error reporting was an absolute godsend and definitely worth the price of admission. (Well, I get my Red Gate licenses for free, but you know what I mean!)My initial reaction was to simply exclude the serializable results class and all of its' members from obfuscation, and that was just dandy, but a few weeks on I decided to look into exactly why serialization had broken and change the code to work with SA so I could write any new code to be compatible with SmartAssembly and save me some additional testing and changes to the SA project.In simple terms, SA does all that it can to prevent serialization problems, for instance, it will not obfuscate public members of a DLL and it will exclude any types with the Serializable attribute from obfuscation. This prevents public members and properties from being made private and having the name changed. If the serialization is done inside the executable, however, public members have the access changed to private and are renamed. That was my first problem, because my types were in the executable assembly and implemented ISerializable, but did not have the Serializable attribute set on them!public class RedFlagResults : ISerializable        {        }The second problem caused by the pruning feature. Although RedFlagResults had public members, they were not truly properties, and used the GetObjectData() method of ISerializable to serialize the members. For that reason, SA could not exclude these members from pruning and further broke the serialization. public class RedFlagResults : ISerializable        {                public List<RedFlag.Exception> Exceptions;                 #region ISerializable Members                 public void GetObjectData(SerializationInfo info, StreamingContext context)                {                                info.AddValue("Exceptions", Exceptions);                }                 #endregionSo to fix this, it was necessary to make Exceptions a proper property by implementing get and set on it. Also, I added the Serializable attribute so that I don't have to exclude the class from obfuscation in the SA project any more. The DoNotPrune attribute means I do not need to exclude the class from pruning.[Serializable, SmartAssembly.Attributes.DoNotPrune]        public class RedFlagResults        {                public List<RedFlag.Exception> Exceptions {get;set;}        }Similarly, the Exception class gets the Serializable and DoNotPrune attributes applied so all of its' properties are excluded from obfuscation.Now my project has some protection from prying eyes by scrambling up the code so it's harder to reverse-engineer, without breaking anything. SmartAssembly has also provided the benefit of merging so that the end-user doesn't need to extract all of the DLL files needed by RedFlag into a directory, and can be run directly from the .zip archive. 
When an error occurs (hey, I'm only human!), an exception report can be sent to me so I can see what went wrong without having to, er, debug the debugger.
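
    As a quick sanity check of the corrected attributes, a round-trip along the following lines is what I'd expect to work. This sketch assumes a runtime formatter such as BinaryFormatter is what consumes the [Serializable] attribute (the post doesn't name the exact serializer RedFlag uses), and it reuses the RedFlagResults type shown above.

        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        var results = new RedFlagResults { Exceptions = new List<RedFlag.Exception>() };
        var formatter = new BinaryFormatter();

        // serialize - works because the class now carries [Serializable]
        using (var stream = File.Create("results.bin"))
        {
            formatter.Serialize(stream, results);
        }

        // deserialize and get the property values back
        using (var stream = File.OpenRead("results.bin"))
        {
            var loaded = (RedFlagResults)formatter.Deserialize(stream);
        }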

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data Connectors Downloads here, includes: Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0 Oracle Loader for Hadoop Release 2.1.0 Oracle Data Integrator Companion 11g Oracle R Connector for Hadoop v 2.1 Oracle Big Data Documentation The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB) Integrated Software and Big Data Connectors User's Guide HTML PDF Oracle Data Integrator (ODI) Application Adapter for Hadoop Apache Hadoop is designed to handle and process data that is typically from data sources that are non-relational and data volumes that are beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities: Loading data into Hadoop from the local file system and HDFS Performing validation and transformation of data within Hadoop Loading processed data from Hadoop to an Oracle database for further processing and generating reports Oracle Database Loader for Hadoop Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance. Oracle R Connector for Hadoop Oracle R Connector for Hadoop is a collection of R packages that provide: Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files You install and load this package as you would any other R package. 
Using simple R functions, you can perform tasks such as: Access and transform HDFS data using a Hive-enabled transparency layer Use the R language for writing mappers and reducers Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations Oracle SQL Connector for Hadoop Distributed File System Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats: Data Pump files in HDFS Delimited text files in HDFS Hive tables For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS. Related Documentation Cloudera's Distribution Including Apache Hadoop Library HTML Oracle R Enterprise HTML Oracle NoSQL Database HTML Recent Blog Posts Big Data Appliance vs. DIY Price Comparison Big Data: Architecture Overview Big Data: Achieve the Impossible in Real-Time Big Data: Vertical Behavioral Analytics Big Data: In-Memory MapReduce Flume and Hive for Log Analytics Building Workflows in Oozie

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle specific DML for inserting into multiple target tables from a single query result – without reprocessing the query or staging its result. When designing this to exploit the IKM you must split the problem into the reusable parts – the select part goes in one interface (I named SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below… /*INSERT_SPECIAL interface */ insert  all when 1=1 And (INCOME_LEVEL > 250000) then into SCOTT.CUSTOMERS_NEW (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /* INSERT_REGULAR interface */ when 1=1  then into SCOTT.CUSTOMERS_SPECIAL (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /*SELECT*PART interface */ select        CUSTOMERS.EMAIL EMAIL,     CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,     UPPER(CUSTOMERS.NAME) NAME,     CUSTOMERS.USER_MODIFIED USER_MODIFIED,     CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,     CUSTOMERS.BIRTH_DATE BIRTH_DATE,     CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,     CUSTOMERS.ID ID,     CUSTOMERS.USER_CREATED USER_CREATED,     CUSTOMERS.GENDER GENDER,     CUSTOMERS.DATE_CREATED DATE_CREATED,     CUSTOMERS.INCOME_LEVEL INCOME_LEVEL from    SCOTT.CUSTOMERS   CUSTOMERS where    (1=1) Firstly I create a SELECT_PART temporary interface for the query to be reused and in the IKM assignment I state that it is defining the query, it is not a target and it should not be executed. Then in my INSERT_SPECIAL interface loading a target with a filter, I set define query to false, then set true for the target table and execute to false. This interface uses the SELECT_PART query definition interface as a source. Finally in my final interface loading another target I set define query to false again, set target table to true and execute to true – this is the go run it indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports this multi statement request in 11g, here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/ Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them. It works in the same way as the ODI MTI IKM, multiple interfaces orchestrated in a package, each interface contributes some SQL, the last interface in the chain executes the multi statement.
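
    To see the underlying Oracle syntax the IKM is orchestrating, here is a stripped-down conditional multi-table insert, independent of ODI; the table and column names are made up purely for illustration.

        INSERT ALL
          WHEN income_level > 250000 THEN
            INTO customers_special (id, name, income_level)
            VALUES (id, name, income_level)
          WHEN 1 = 1 THEN
            INTO customers_new (id, name, income_level)
            VALUES (id, name, income_level)
        SELECT c.id, c.name, c.income_level
        FROM   customers c;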

    Read the article

  • 12.04 installation started to black screen during boot today

    - by Cedric
    NOTE: Most of this question is now irrelevant. UPDATE 3 summarizes the problem as it stands. I've been running 12.04 on my Lenovo laptop for one month now (updated from 11.04), and I have not had any significant problem until today. This morning, when I boot, I pass the Grub screen, then I get to the purple loading screen with dots as usual, then for some reason I got to the terminal login, with no GUI. startx gives me a black screen. Ctrl+F7-F8 didn't help either. It's similar to: After the update today no graphical interface anymore - 12.04 I followed the instructions at the end, to flush the ATI drivers (which I had installed), and fall back to the community drivers. That made me lose the login! Now I just get a black screen after the Ubuntu loading screen. I can still access the console through recovery, and I've gotten into VESA mode once or twice (not reproducible, for some reason). I've tried various permutations of xorg.conf, without success. Xorg -configure fails for now, though I might be able to get it to work. apt-get update/upgrade doesn't improve anything either. However, both Windows and the 12.04 Live CD still work beautifully, and I know that all my data is still there. Is there any way that I could somehow take the configuration from the Live CD and roll with it? I know that I could reinstall, but that sucks, frankly, especially given that there's no straight-forward way of keeping the home (which, incidentally, is unaccessible from the Live CD) Thank you. Update: it seems that the fglrx drivers are still active, even after I've --purged them. From Xorg.0.log: [ 18.235] (WW) fglrx(0): *********************************************************** [ 18.235] (WW) fglrx(0): * DRI initialization failed * [ 18.235] (WW) fglrx(0): * kernel module (fglrx.ko) may be missing or incompatible * [ 18.235] (WW) fglrx(0): * 2D and 3D acceleration disabled * [ 18.235] (WW) fglrx(0): *********************************************************** [ 18.235] Fatal server error: [ 18.235] AddScreen/ScreenInit failed for driver 0 There's also a mention of the "fbdev" module. What is it? PARTIALLY SOLVED: I've undone the damage from the fglrx purge. I'm still mystified as to why uninstalling the packages didn't kill fglrx entirely, but I've now recovered the prompt. The solution to the DRI initialization error was to add radeon.modeset=0 to the GRUB boot options. So I'm back to being dropped to a prompt without any GUI. startx gives me a bunch of messages, though no obvious errors. I have little reason to suspect the video drivers, as they worked fine before today. There is no apparent error message in any of the log files. UPDATE: When I startx, I get an error, Plymounth command failed mountall: Disconnected from Plymouth This is all over the Internet, but I have not found anything that works for me yet. UPDATE 3: If I press ESC during boot, the splash screen (Plymouth!) disappears, and I no longer have any error from Plymouth. The last error message is: Stopping mount filesystems on boot I can then Ctrl+Alt+F1 to get the TTY1, but startx still does not work. Sadly, the Internet knows nothing about this error message, and neither do I. Help!
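
    For reference, the usual way to make a kernel option such as radeon.modeset=0 (or nomodeset) permanent, rather than typing it at the GRUB prompt each boot, is to add it to /etc/default/grub and regenerate the GRUB configuration; a minimal sketch, assuming you can reach a working shell:

        # /etc/default/grub - append the option to the default kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.modeset=0"

        # then regenerate /boot/grub/grub.cfg
        sudo update-grub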

    Read the article

  • Ubuntu 12.04 Install - Black Screen - No Grub - Have run Boot Repair Disk

    - by Pat
    Day 4 of my purgatory. History: Had problems with live CD at first, had to set the "nomodeset" option ... and then it worked fine. Installed Ubuntu "Alongside" Windows XP from live CD (NOT wubi) Upon reboot after installation, I get the BIOS ... and then a black screen. If I hit shift after the BIOS screen I get text that says "loading GRUB ..", but then no GRUB ... just a black screen. What I have tried to do: Total re-installation ... 3 times now. Also tried with wubi, same black screen. Have gone back to the normal (non-wubi) install. After installation, I tried re-booting the live cd ... and trying to change GRUB file using: sudo gedit /etc/default/grub ... to "nomodeset" and "timeout=10" ... but won't let me save my changes because I'm using the live cd "in memory" system and don't have permissions to the disks (I think). I tried logging in ... but it won't let me. I then read many posts on this site. I'm stuck. This morning, I ran the "boot repair disk". Results here: http://paste.ubuntu.com/1132333/ What I think is wrong: Since I can get the live CD to run (perfectly) with the "nomodeset" option, I think all I need to do is get to GRUB to change that ... but I can't get to GRUB. Appreciate any advice. Pat Day 5 ... I downloaded "Super Grub 2 Disk" from: http://www.supergrubdisk.org/super-grub2-disk/ This looks promising. I can boot the disk and it brings me to a GRUB program that allows me to: 1) Boot to Windows ... which works 2) Boot to Ubuntu ... which does NOT work When I choose boot to Ubuntu, I get lines across the screen which is an obvious video card problem. Likely because I need to set the "nomodeset" option. So, I attempted to use super grub2 to edit the grub file ... but it is TOTALLY different than the Ubuntu grub file ... and I don't know where to put the "nomodeset" option. Still stuck ... The bottom line is that: 1) I need to edit /etc/default/grub on sda(1) ... which is my boot drive ... to add the "nomodeset" option 2) To do that I need to get into grub ... but, I can't. Holding down shift just echo's "loading grub .." and then takes me to a black screen 3) I can boot to the live CD by setting nomodeset .... but I cannot access the hard disk as root ... I can't save my changes! Can anyone tell me how to login as root for the filesystem from the live CD ... so I can edit the grub file on the HARD DISK ... and then run update-grub??
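
    For context, the standard way to edit files on the installed system from a live CD is to mount the root partition and chroot into it; a rough sketch is below. The partition name is an assumption - check which one holds the Ubuntu root with sudo fdisk -l before copying anything.

        # from the live CD session (assumes the Ubuntu root is on /dev/sda5 - verify first)
        sudo fdisk -l
        sudo mount /dev/sda5 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt

        # now inside the installed system as root: add nomodeset, then rebuild GRUB
        nano /etc/default/grub        # e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        update-grub
        grub-install /dev/sda         # reinstall GRUB to the MBR if it never appears
        exit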

    Read the article

  • C++ problem with assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for the Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a separate projection matrix. I have looked over my code over and over again, but I probably keep missing the obvious reason why this is happening. Here is an image of my game: it's simply a 6-sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen:

        void C_MediaLoader::display(void)
        {
            float tmp;
            glTranslatef(0,0,0);
            // rotate it around the y axis
            glRotatef(angle,0.f,0.f,1.f);
            glColor4f(1,1,1,1);
            // scale the whole asset to fit into our view frustum
            tmp = scene_max.x-scene_min.x;
            tmp = aisgl_max(scene_max.y - scene_min.y,tmp);
            tmp = aisgl_max(scene_max.z - scene_min.z,tmp);
            tmp = (1.f / tmp);
            glScalef(tmp/5, tmp/5, tmp/5);
            // center the model
            //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z );
            // if the display list has not been made yet, create a new one and
            // fill it with scene contents
            if(scene_list == 0)
            {
                scene_list = glGenLists(1);
                glNewList(scene_list, GL_COMPILE);
                // now begin at the root node of the imported data and traverse
                // the scenegraph by multiplying subsequent local transforms
                // together on GL's matrix stack.
                recursive_render(scene, scene->mRootNode);
                glEndList();
            }
            glCallList(scene_list);
        }

        void C_MediaLoader::recursive_render(const struct aiScene *sc, const struct aiNode* nd)
        {
            unsigned int i;
            unsigned int n = 0, t;
            struct aiMatrix4x4 m = nd->mTransformation;
            // update transform
            aiTransposeMatrix4(&m);
            glPushMatrix();
            glMultMatrixf((float*)&m);
            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n)
            {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);
                if(mesh->mNormals == NULL)
                {
                    glDisable(GL_LIGHTING);
                }
                else
                {
                    glEnable(GL_LIGHTING);
                }
                for (t = 0; t < mesh->mNumFaces; ++t)
                {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;
                    switch(face->mNumIndices)
                    {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }
                    glBegin(face_mode);
                    for(i = 0; i < face->mNumIndices; i++)
                    {
                        int index = face->mIndices[i];
                        if(mesh->mColors[0] != NULL)
                            glColor4fv((GLfloat*)&mesh->mColors[0][index]);
                        if(mesh->mNormals != NULL)
                            glNormal3fv(&mesh->mNormals[index].x);
                        glVertex3fv(&mesh->mVertices[index].x);
                    }
                    glEnd();
                }
            }
            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n)
            {
                recursive_render(sc, nd->mChildren[n]);
            }
            glPopMatrix();
        }

    Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article

  • Tweaking Remote Control (In-Kernel LIRC)

    - by Geoff
    I've recently rebuilt my MythTV box using Mythbuntu 12.04, to take advantage of newer hardware (Ivy Bridge). On my previous build I used lirc to manage the remote, i.e. the mapping of key codes - keypresses - application keys; it was quite a journey to learn it all, and I ended up fairly comfortable with how it all worked. What I have: I have a cheap Chinavasion remote and USB dongle, which I've found several articles on; these largely revolve around working with XBMC (interesting, but I don't think directly applicable) and also around getting a Harmony remote to work (it's a Chinavasion CVSB-983 - very useful, since I needed this to get my Harmony 900 working). Mythbuntu 12.04 64-bit MythTV 0.25 (likely irrelevant) How it is right now When I plug this in, it 'just works'. Which is great, except that Ubuntu uses it natively, and prevents some of the button presses from getting through to Myth. For example, I can send a button from the remote that equates to Ctrl-Alt-A (which I assume Ubuntu isn't interested in), and then trap that in Mythfrontend, but the remote's Play button is caught by Ubuntu (which displays a large circle with a line though it, as there's no media player loaded). I understand that this is because lirc is merged into the kernel now, and I like that. What I've done so far: Found the device using lsusb: $ lsusb Bus 001 Device 004: ID 073a:2230 Chaplet Systems, Inc. infrared dongle for remote Found the event device number: $ cat /proc/bus/input/devices I: Bus=0003 Vendor=073a Product=2230 Version=0110 N: Name="HID 073a:2230" P: Phys=usb-0000:00:1a.0-1.2/input0 S: Sysfs=/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/input/input5 U: Uniq= H: Handlers=sysrq kbd mouse1 event5 js0 B: PROP=0 B: EV=10001f B: KEY=4c37fff072ff32d bf54445600000000 ffffffffff 30c100b17c007 ffa67bfad951dfff febeffdfffefffff fffffffffffffffe B: REL=343 B: ABS=100030000 B: MSC=10 Tested the input with evtest (I pressed Play): $ sudo evtest /dev/input/event5 Input driver version is 1.0.1 Input device ID: bus 0x3 vendor 0x73a product 0x2230 version 0x110 Input device name: "HID 073a:2230" Supported events: Event type 0 (EV_SYN) Event type 1 (EV_KEY) Event code 1 (KEY_ESC) Event code 2 (KEY_1) Event code 3 (KEY_2) Event code 4 (KEY_3) Event code 5 (KEY_4) Event code 6 (KEY_5) Event code 7 (KEY_6) <------------snipped lots of 'Event code' lines------------> Testing ... (interrupt to exit) Event: time 1336435683.230656, -------------- SYN_REPORT ------------ Event: time 1336435683.246648, type 4 (EV_MSC), code 4 (MSC_SCAN), value c00cd Event: time 1336435683.246652, type 1 (EV_KEY), code 164 (KEY_PLAYPAUSE), value 0 Event: time 1336435683.246655, -------------- SYN_REPORT ------------ Tested showkey, again for the Play key: $ sudo showkey -s kb mode was RAW [ if you are trying this under X, it might not work since the X server is also reading /dev/console ] press any key (program terminates 10s after last keypress)... 0xe0 0x22 0xe0 0xa2 What I want: I'd like a way to scan the incoming button presses, if the above method isn't correct. I'd like to either remap each button press to something that Ubuntu/Unity will ignore, or even better pass the keypress directly to Myth (I suspect this later is only possible with lirc, but I could be wrong). I would really like to do this with the in-kernel drivers, i.e. 
without explicitly loading lirc; if that's the way the world is going, I'd rather find a way to map the current behaviour to what I want, rather than forcing the 'old' arrangement of loading lirc outside the kernel. Learning something new is also worthwhile! My guess: I'm assuming that this will require using setkeycodes, but have had trouble finding enough information to configure this. Any help greatly appreciated!
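
    As a starting point for the setkeycodes guess at the end, the showkey -s output above (0xe0 0x22) gives the scancode to remap; the sketch below is only an illustration - the scancode e022 and the target keycode 164 (KEY_PLAYPAUSE, taken from the evtest output) are specific to this remote, it may not apply to every USB HID device, and the mapping is lost on reboot unless it is re-applied from a startup script.

        # remap the Play scancode (0xe0 0x22 -> e022) to keycode 164 (KEY_PLAYPAUSE)
        sudo setkeycodes e022 164

        # confirm the new mapping
        sudo showkey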

    Read the article

  • Modernizr Rocks HTML5

    - by Laila
    HTML5 is a moving target.  At the moment, we don't know what will be in future versions.  In most circumstances, this really matters to the developer. When you're using Adobe Air, you can be reasonably sure what works, what is there, and what isn't, since you have a version of the browser built-in. With Metro, you can assume that you're going to be using at least IE 10.   If, however,  you are using HTML5 in a web application, then you are going to rely heavily on Feature Detection.  Feature-Detection is a collection of techniques that tell you, via JavaScript, whether the current browser has this feature natively implemented or not Feature Detection isn't just there for the esoteric stuff such as  Geo-location,  progress bars,  <canvas> support,  the new <input> types, Audio, Video, web workers or storage, but is required even for semantic markup, since old browsers make a pigs ear out of rendering this.  Feature detection can't rely just on reading the browser version and inferring from that what works. Instead, you must use JavaScript to check that an HTML5 feature is there before using it.  The problem with relying on the user-agent is that it takes a lot of historical data  to work out what version does what, and, anyway, the user-agent can be, and sometimes is, spoofed. The open-source library Modernizr  is just about the most essential  JavaScript library for anyone using HTML5, because it provides APIs to test for most of the CSS3 and HTML5 features before you use them, and is intelligent enough to alter semantic markup into 'legacy' 'markup  using shims  on page-load  for old browsers. It also allows you to check what video Codecs are installed for playing video. It also provides media queries  and conditional resource-loading (formerly YepNope.js.).  Generally, Modernizr gives you the choice of what you do about browsers that don't support the feature that you want. Often, the best choice is graceful degradation, but the resource-loading feature allows you to dynamically load JavaScript Shims to replace the standard API for missing or defective HTML5 functionality, called 'PolyFills'.  As the Modernizr site says 'Yes, not only can you use HTML5 today, but you can use it in the past, too!' The evolutionary progress of HTML5  requires a more defensive style of JavaScript programming where the programmer adopts a mindset of fearing the worst ( IE 6)  rather than assuming the best, whilst exploiting as many of the new HTML features as possible for the requirements of the site or HTML application.  Why would anyone want the distraction of developing their own techniques to do this when  Modernizr exists to do this for you? Laila
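
    To make the feature-detection idea concrete, a small sketch of the style of check Modernizr enables is below; the properties and load API shown (Modernizr.geolocation, Modernizr.canvas, Modernizr.load) reflect the Modernizr 2.x releases current at the time of writing, and the callback names and script file names are placeholders.

        // graceful degradation: only use the native API when the browser has it
        if (Modernizr.geolocation) {
            navigator.geolocation.getCurrentPosition(showOnMap); // showOnMap is a placeholder callback
        } else {
            // fall back to something simpler, e.g. asking the user for a post code
        }

        // conditional resource loading: pull in a polyfill only where it is needed
        Modernizr.load({
            test: Modernizr.canvas,
            nope: 'excanvas-polyfill.js',   // placeholder polyfill script
            complete: function () { drawChart(); }
        });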

    Read the article

  • Problem with Assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a seperate projection matrix. I have looked over my code over and over again, but I probably keep on missing the obvious reason why this is happening. Here is an image of my game: It's simply a 6 sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen: void C_MediaLoader::display(void) { float tmp; glTranslatef(0,0,0); // rotate it around the y axis glRotatef(angle,0.f,0.f,1.f); glColor4f(1,1,1,1); // scale the whole asset to fit into our view frustum tmp = scene_max.x-scene_min.x; tmp = aisgl_max(scene_max.y - scene_min.y,tmp); tmp = aisgl_max(scene_max.z - scene_min.z,tmp); tmp = (1.f / tmp); glScalef(tmp/5, tmp/5, tmp/5); // center the model //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z ); // if the display list has not been made yet, create a new one and // fill it with scene contents if(scene_list == 0) { scene_list = glGenLists(1); glNewList(scene_list, GL_COMPILE); // now begin at the root node of the imported data and traverse // the scenegraph by multiplying subsequent local transforms // together on GL's matrix stack. recursive_render(scene, scene->mRootNode); glEndList(); } glCallList(scene_list); } void C_MediaLoader::recursive_render (const struct aiScene *sc, const struct aiNode* nd) { unsigned int i; unsigned int n = 0, t; struct aiMatrix4x4 m = nd->mTransformation; // update transform aiTransposeMatrix4(&m); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++) { int index = face->mIndices[i]; if(mesh->mColors[0] != NULL) glColor4fv((GLfloat*)&mesh->mColors[0][index]); if(mesh->mNormals != NULL) glNormal3fv(&mesh->mNormals[index].x); glVertex3fv(&mesh->mVertices[index].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n]); } glPopMatrix(); } Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article

  • Ubuntu 10.04 and fedora 14 grub conflict

    - by sawren
    I tried to triple boot Windows xp, Fedora 14 and Ubuntu 10.04. I first installed Windows xp, then fedora followed by Ubuntu. The problem is that i don't get option to boot Ubuntu while Xp boots fine. It seems Ubuntu was unable to replace Fedora's grub with its own at MBR. Looking at their grub conf file, Fedora and Ubuntu identifies same harddisk as two different devices and i do have another 80 GB harddisk which doesn't have any OS. Below is the details on my partitions and partial information from grub files of both OS. Device Boot Start End Blocks Id System /dev/sda1 * 63 40965749 20482843+ 7 HPFS/NTFS /dev/sda2 102414436 312576704 105081134+ f W95 Ext'd (LBA) /dev/sda3 40965750 102414374 30724312+ 83 Linux - /Home (for fedora) /dev/sda5 102414438 204812684 51199123+ 7 HPFS/NTFS /dev/sda6 204812748 253634219 24410736 83 Linux -- ubuntu /dev/sda7 253634283 302455754 24410736 83 Linux -- fedora /dev/sda8 302455818 312576704 5060443+ 82 Linux swap / Solaris grub.cfg from ubuntu ### BEGIN /etc/grub.d/10_linux ### menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod ext2 set root='(hd1,7)' search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 linux /boot/vmlinuz-2.6.32-21-generic root=UUID=cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 ro quiet splash initrd /boot/initrd.img-2.6.32-21-generic } menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod ext2 set root='(hd1,7)' search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 echo 'Loading Linux 2.6.32-21-generic ...' linux /boot/vmlinuz-2.6.32-21-generic root=UUID=cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 ro single echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-2.6.32-21-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod ext2 set root='(hd1,7)' search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod ext2 set root='(hd1,7)' search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Microsoft Windows XP Professional (on /dev/sdb1)" { insmod ntfs set root='(hd1,1)' search --no-floppy --fs-uuid --set cad48cc6d48cb5eb drivemap -s (hd0) ${root} chainloader +1 } menuentry "Fedora (2.6.35.14-96.fc14.i686) (on /dev/sdb6)" { insmod ext2 set root='(hd1,6)' search --no-floppy --fs-uuid --set 6aee34cf-f77a-489a-9361-85d07194b84b linux /boot/vmlinuz-2.6.35.14-96.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet initrd /boot/initramfs-2.6.35.14-96.fc14.i686.img } menuentry "Fedora (2.6.35.6-45.fc14.i686) (on /dev/sdb6)" { insmod ext2 set root='(hd1,6)' search --no-floppy --fs-uuid --set 6aee34cf-f77a-489a-9361-85d07194b84b linux /boot/vmlinuz-2.6.35.6-45.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet initrd /boot/initramfs-2.6.35.6-45.fc14.i686.img } ### END /etc/grub.d/30_os-prober ### grub.conf from fedora default=0 timeout=5 splashimage=(hd0,5)/boot/grub/splash.xpm.gz hiddenmenu title Fedora (2.6.35.14-96.fc14.i686) root (hd0,5) kernel /boot/vmlinuz-2.6.35.14-96.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet initrd /boot/initramfs-2.6.35.14-96.fc14.i686.img title Fedora (2.6.35.6-45.fc14.i686) root (hd0,5) kernel /boot/vmlinuz-2.6.35.6-45.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet initrd /boot/initramfs-2.6.35.6-45.fc14.i686.img title Other rootnoverify (hd0,0) chainloader +1

    Read the article

  • Enable grub boot menu on new system

    - by Remus Rigo
    I have installed Ubuntu 11.04 and I would like to see the boot menu when the system starts (by default it is hidden or the timeout=0) # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 insmod jpeg if background_image /boot/grub/boot.jpg; then true else set menu_color_normal=white/black set menu_color_highlight=black/light-gray fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 2.6.38-8-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 linux /boot/vmlinuz-2.6.38-8-generic root=UUID=44f311b4-0b40-4d10-b004-78108539fc39 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.38-8-generic } menuentry 'Ubuntu, with Linux 2.6.38-8-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 echo 'Loading Linux 2.6.38-8-generic ...' linux /boot/vmlinuz-2.6.38-8-generic root=UUID=44f311b4-0b40-4d10-b004-78108539fc39 ro single echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-2.6.38-8-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### if [ "x${timeout}" != "x-1" ]; then if keystatus; then if keystatus --shift; then set timeout=-1 else set timeout=0 fi else if sleep --interruptible 3 ; then set timeout=0 fi fi fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ###

    Read the article

  • Fixing some Visual Studio RC install issues

    - by terje
    The Visual Studio RC has shown some install issues in some cases, particularly for those who upgrade from VS 11 Beta. I have listed the fixes known so far below, and will update if there are more issues. Note that a repair will not fix the issue, and a Windows restore and subsequent reinstall may not fix it either. The system seems to remember too much. That was the case for me, at least. The fixes below, however, cure these issues. 1. The Team Explorer Build node doesn't work You get an error saying System.TypeLoadException like this: To solve this do as follows: 1. Open a command prompt as administrator 2. Go to your program files directory for VS 2012 and down to the extension folder, like: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer 3. Run "gacutil -if Microsoft.TeamFoundation.Build.Controls.dll" 2. The SQL Editor gives a loading error When you start up VS 2012 RC you get a loading error message. The same happens if you try to go from the menu to SQL/Transact-SQL Editor/New Query. To solve this do as follows: 1. Open Control Panel/Programs and Features 2. Locate "Microsoft SQL Server 2012 Data-Tier App Framework" (Note: you might find up to 4 such instances) The ones with version numbers ending in 55 are from the SQL 2012 RC, the ones ending in 60 are from the SQL 2012 RTM. There are two of each, one for x32 and one for x64. Which is which no one knows. 3. Right-click each of them, and select Repair. (It would be nice if someone with this issue tried only the latest RTM ones, to see if that clears the error, and commented back to this post. I am out of non-functioning VS's.) 3. Errors referring to some extension You get errors referring to some extension that can't be loaded, or can't be found. Check the activity log (see below), and verify there. If you see yellow collision warnings there, the fix here should solve those too. To solve these: 1. Open a Visual Studio 2012 command prompt 2. Run: devenv /resetsettings How to check for errors using the log Do as follows to get to the activity log for Visual Studio 2012 RC: 1. Open a Visual Studio 2012 command prompt 2. Run: devenv /log This starts up Visual Studio. 3. Go to %appdata%\Microsoft\VisualStudio\11.0 4. Double-click the file named ActivityLog.xml. It will start up in your browser, and be formatted using the xslt in the same directory. 5. Look for items marked in red. Example for Issue 1:

    Read the article

  • Generate texture for a heightmap

    - by James
    I've recently been trying to blend multiple textures based on the height at different points in a heightmap. However i've been getting poor results. I decided to backtrack and just attempt to recreate one single texture from an SDL_Surface (i'm using SDL) and just send that into opengl. I'll put my code for creating the texture and reading the colour values. It is a 24bit TGA i'm loading, and i've confirmed that the rest of my code works because i was able to send the surfaces pixels directly to my createTextureFromData function and it drew fine. struct RGBColour { RGBColour() : r(0), g(0), b(0) {} RGBColour(unsigned char red, unsigned char green, unsigned char blue) : r(red), g(green), b(blue) {} unsigned char r; unsigned char g; unsigned char b; }; // main loading code SDLSurfaceReader* reader = new SDLSurfaceReader(m_renderer); reader->readSurface("images/grass.tga"); // new texture unsigned char* newTexture = new unsigned char[reader->m_surface->w * reader->m_surface->h * 3 * reader->m_surface->w]; for (int y = 0; y < reader->m_surface->h; y++) { for (int x = 0; x < reader->m_surface->w; x += 3) { int index = (y * reader->m_surface->w) + x; RGBColour colour = reader->getColourAt(x, y); newTexture[index] = colour.r; newTexture[index + 1] = colour.g; newTexture[index + 2] = colour.b; } } unsigned int id = m_renderer->createTextureFromData(newTexture, reader->m_surface->w, reader->m_surface->h, RGB); // functions for reading pixels RGBColour SDLSurfaceReader::getColourAt(int x, int y) { Uint32 pixel; Uint8 red, green, blue; RGBColour rgb; pixel = getPixel(m_surface, x, y); SDL_LockSurface(m_surface); SDL_GetRGB(pixel, m_surface->format, &red, &green, &blue); SDL_UnlockSurface(m_surface); rgb.r = red; rgb.b = blue; rgb.g = green; return rgb; } // this function taken from SDL documentation // http://www.libsdl.org/cgi/docwiki.cgi/Introduction_to_SDL_Video#getpixel Uint32 SDLSurfaceReader::getPixel(SDL_Surface* surface, int x, int y) { int bpp = m_surface->format->BytesPerPixel; Uint8 *p = (Uint8*)m_surface->pixels + y * m_surface->pitch + x * bpp; switch (bpp) { case 1: return *p; case 2: return *(Uint16*)p; case 3: if (SDL_BYTEORDER == SDL_BIG_ENDIAN) return p[0] << 16 | p[1] << 8 | p[2]; else return p[0] | p[1] << 8 | p[2] << 16; case 4: return *(Uint32*)p; default: return 0; } } I've been stumped at this, and I need help badly! Thanks so much for any advice.
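    For comparison, here is a small illustration (in Java rather than the C++/SDL of the snippet above) of the packing arithmetic that most often goes wrong when flattening pixels into a tightly packed RGB byte array: the buffer holds width * height * 3 bytes, the loop steps one pixel at a time, and the byte offset is the pixel index multiplied by the bytes per pixel. The int[] pixel source and the class name are purely hypothetical stand-ins for getColourAt().

        // Minimal sketch of packing an RGB image into a tightly packed byte buffer.
        // The pixel source is a plain int[] of 0xRRGGBB values standing in for an
        // SDL-style getColourAt(x, y); the point is the offset arithmetic, not the I/O.
        public final class RgbPacker {

            /** Packs width*height pixels (0xRRGGBB ints, row-major) into a byte[] of size width*height*3. */
            public static byte[] pack(int[] pixels, int width, int height) {
                byte[] out = new byte[width * height * 3];           // 3 bytes per pixel
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {                // step one pixel at a time
                        int rgb = pixels[y * width + x];
                        int base = (y * width + x) * 3;              // byte offset = pixel index * bytes per pixel
                        out[base]     = (byte) ((rgb >> 16) & 0xFF); // R
                        out[base + 1] = (byte) ((rgb >> 8) & 0xFF);  // G
                        out[base + 2] = (byte) (rgb & 0xFF);         // B
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                int[] pixels = { 0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF }; // 2x2 test image
                byte[] packed = pack(pixels, 2, 2);
                System.out.println("packed bytes: " + packed.length);      // prints 12
            }
        }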

    Read the article

  • Grub2 : Windows 7 can't boot installing with Ubuntu 10.04 on different hard drive

    - by dellphi
    I use a dual boot with two hard disks and two OS is Ubuntu 10.04 and Windows 7. Windows 7 installed on the first disk, first partition. Grub is installed on a second hard disk MBR, and Ubuntu installed on an extended partition on a second hard drive. When I select Windows 7 on the Grub menu, the HDD lamp lights up briefly and then black screen on the monitor, with the status of the keyboard is still functioning. Until now (with the default boot from first HDD), I have to press F12 to get into the Grub to run Linux on a second HDD. ================ fdisk -l ================================ dellph1@dellph1-desktop:~$ fdisk -l omitting empty partition (5) Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00087dec Device Boot Start End Blocks Id System /dev/sda1 * 1 23104 185582848+ 7 HPFS/NTFS /dev/sda2 23105 121601 791177122 5 Extended /dev/sda5 36107 74408 307660783+ 7 HPFS/NTFS /dev/sda6 74409 100081 206218341 7 HPFS/NTFS /dev/sda7 100082 121601 172859368+ 7 HPFS/NTFS Disk /dev/sdb: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x6d43dfb2 Device Boot Start End Blocks Id System /dev/sdb1 1 10030 80560066 5 Extended /dev/sdb5 * 1 5560 44657601 83 Linux /dev/sdb6 5560 9387 30736384 83 Linux /dev/sdb7 9387 10030 5164032 82 Linux swap / Solaris dellph1@dellph1-desktop:~$ ================= grub.cfg ================== # DO NOT EDIT THIS FILE # It is automatically generated by /usr/sbin/grub-mkconfig using templates from /etc/grub.d and settings from /etc/default/grub # BEGIN /etc/grub.d/00_header if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ ${prev_saved_entry} ]; then set saved_entry=${prev_saved_entry} save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z ${boot_once} ]; then saved_entry=${chosen} save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi } insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=1024x768 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 set locale_dir=($root)/boot/grub/locale set lang=en insmod gettext if [ ${recordfail} = 1 ]; then set timeout=-1 else set timeout=5 fi END /etc/grub.d/00_header BEGIN /etc/grub.d/05_debian_theme insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 insmod jpeg if background_image /usr/share/backgrounds/CurlsbyCandy.jpg ; then set color_normal=white/black set color_highlight=black/light-gray else set menu_color_normal=white/black set menu_color_highlight=black/light-gray fi END /etc/grub.d/05_debian_theme BEGIN /etc/grub.d/10_linux menuentry 'Ubuntu, with Linux 2.6.32-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail 
insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 linux /boot/vmlinuz-2.6.32-24-generic root=UUID=2f014a3a-35f3-4d05-87aa-34ca677160b7 ro splash vga=795 quiet splash nomodeset video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap initrd /boot/initrd.img-2.6.32-24-generic } menuentry 'Ubuntu, with Linux 2.6.32-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 echo 'Loading Linux 2.6.32-24-generic ...' linux /boot/vmlinuz-2.6.32-24-generic root=UUID=2f014a3a-35f3-4d05-87aa-34ca677160b7 ro single splash vga=795 echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.32-24-generic } END /etc/grub.d/10_linux BEGIN /etc/grub.d/30_os-prober menuentry "Windows 7 (loader) (on /dev/sda1)" { insmod ntfs set root='(hd0,1)' search --no-floppy --fs-uuid --set 5cac2139ac210f58 chainloader +1 } END /etc/grub.d/30_os-prober BEGIN /etc/grub.d/40_multisystem Ajout de MultiSystem MULTISYSTEM MENU menuentry "PLoP Boot Manager" { linux16 /boot/plpbt } menuentry "Smart Boot Manager" { search --set -f /boot/sbootmgr.dsk linux16 /boot/memdisk initrd16 /boot/sbootmgr.dsk } FIN MULTISYSTEM MENU END /etc/grub.d/40_multisystem ================================================ I want to keep the Grub on the second HDD. I have been using the Startup Manager, Boot Manager and Grub Customizer, and this problem still unsolved. The easiest thing that I can possibly do is to install Grub on first HDD, but I was curious and maybe someone can help.

    Read the article

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator however on the phone the texture does not show, I see only a white rectangle. This is my code: public void onSurfaceCreated(GL10 gl, EGLConfig config) { byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuffer.asFloatBuffer(); vertexBuffer.put(cardshape); vertexBuffer.position(0); byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); textureBuffer = byteBuffer.asFloatBuffer(); textureBuffer.put(textureshape); textureBuffer.position(0); // Set the background color to black ( rgba ). gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Enable Smooth Shading, default not really needed. gl.glShadeModel(GL10.GL_SMOOTH); // Depth buffer setup. gl.glClearDepthf(1.0f); // Enables depth testing. gl.glEnable(GL10.GL_DEPTH_TEST); // The type of depth testing to do. gl.glDepthFunc(GL10.GL_LEQUAL); // Really nice perspective calculations. gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); gl.glEnable(GL10.GL_TEXTURE_2D); loadGLTexture(gl); } public void onDrawFrame(GL10 gl) { gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glDisable(GL10.GL_DEPTH_TEST); gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glLoadIdentity(); gl.glTranslatef(card.x, card.y, 0.0f); gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); } public void onSurfaceChanged(GL10 gl, int width, int height) { // Sets the current view port to the new size. 
gl.glViewport(0, 0, width, height); // Select the projection matrix gl.glMatrixMode(GL10.GL_PROJECTION); // Reset the projection matrix gl.glLoadIdentity(); // Calculate the aspect ratio of the window GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f); // Select the modelview matrix gl.glMatrixMode(GL10.GL_MODELVIEW); // Reset the modelview matrix gl.glLoadIdentity(); } public int[] texture = new int[1]; public void loadGLTexture(GL10 gl) { // loading texture Bitmap bitmap; bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image); // generate one texture pointer gl.glGenTextures(0, texture, 0); //adds texture id to texture array // ...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now // create nearest filtered texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); // Use Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); // Clean up bitmap.recycle(); } As per many other similar issues and resolutions on the web i have tried setting the minsdkversion is 3, loading the bitmap via an input stream bitmap = BitmapFactory.decodeStream(is), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder and putting them in the raw folder.. all of which didn't help. I'm not really sure what else to try..
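    Without the device's GL error state it is hard to be definitive about why the texture only shows on the emulator, but one detail in the snippet worth double-checking is the call gl.glGenTextures(0, texture, 0): in the GL10 API the first argument is the number of texture names to generate, so passing 0 requests none. For comparison, a minimal upload helper using the same classes the question already uses (bitmap decoding omitted) is sketched below.

        import javax.microedition.khronos.opengles.GL10;

        import android.graphics.Bitmap;
        import android.opengl.GLUtils;

        // Sketch of a texture-upload helper for the GL10 path described above.
        // Note the first argument of glGenTextures: it is the count of names to
        // generate, so it must be at least 1 (passing 0 allocates no texture name).
        public final class TextureLoader {

            public static int loadTexture(GL10 gl, Bitmap bitmap) {
                int[] names = new int[1];
                gl.glGenTextures(1, names, 0);                    // generate ONE texture name
                gl.glBindTexture(GL10.GL_TEXTURE_2D, names[0]);   // make it the current 2D texture

                // Filtering must be set; GL_NEAREST / GL_LINEAR need no mipmaps.
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

                // Upload the bitmap into the currently bound texture.
                GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
                return names[0];
            }
        }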

    Read the article

  • Single use download script - Modification [on hold]

    - by Iulius
    I have this Single use download script! This contain 3 php files: page.php , generate.php and variables.php. Page.php Code: <?php include("variables.php"); $key = trim($_SERVER['QUERY_STRING']); $keys = file('keys/keys'); $match = false; foreach($keys as &$one) { if(rtrim($one)==$key) { $match = true; $one = ''; } } file_put_contents('keys/keys',$keys); if($match !== false) { $contenttype = CONTENT_TYPE; $filename = SUGGESTED_FILENAME; readfile(PROTECTED_DOWNLOAD); exit; } else { ?> <html> <head> <meta http-equiv="refresh" content="1; url=http://docs.google.com/"> <title>Loading, please wait ...</title> </head> <body> Loading, please wait ... </body> </html> <?php } ?> Generate.php Code: <?php include("variables.php"); $password = trim($_SERVER['QUERY_STRING']); if($password == ADMIN_PASSWORD) { $new = uniqid('key',TRUE); if(!is_dir('keys')) { mkdir('keys'); $file = fopen('keys/.htaccess','w'); fwrite($file,"Order allow,deny\nDeny from all"); fclose($file); } $file = fopen('keys/keys','a'); fwrite($file,"{$new}\n"); fclose($file); ?> <html> <head> <title>Page created</title> <style> nl { font-family: monospace } </style> </head> <body> <h1>Page key created</h1> Your new single-use page link:<br> <nl> <?php echo "http://" . $_SERVER['HTTP_HOST'] . DOWNLOAD_PATH . "?" . $new; ?></nl> </body> </html> <?php } else { header("HTTP/1.0 404 Not Found"); } ?> And the last one Variables.php Code: <? define('PROTECTED_DOWNLOAD','download.php'); define('DOWNLOAD_PATH','/.work/page.php'); define('SUGGESTED_FILENAME','download-doc.php'); define('ADMIN_PASSWORD','1234'); define('EXPIRATION_DATE', '+36 hours'); header("Cache-Control: no-cache, must-revalidate"); header("Expires: ".date('U', strtotime(EXPIRATION_DATE))); ?> The http://www.site.com/generate.php?1234 will generate a unique link like page.php?key1234567890. This link page.php?key1234567890 will be sent by email to my user. Now how can I generate a link like this page.php?key1234567890&[email protected] ? So I think I must access the generator page like this generate.php?1234&[email protected] . P.S. This variable will be posted on the download page by "Hello, " I tried everthing to complete this, and no luck. Thanks in advance for help.
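    The script itself is PHP, but the link-building idea is language-neutral. As a rough sketch in Java (with an example address and a hypothetical class name), the key and the e-mail are simply two query-string parameters, and the address should be URL-encoded before it is appended to the link; the download page can then read the email parameter back and print the greeting.

        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;
        import java.util.UUID;

        // Language-neutral sketch (the original script is PHP) of the link-building idea:
        // generate a random single-use key, then append the recipient's e-mail as a
        // second, URL-encoded query parameter so the download page can greet the user.
        public final class SingleUseLink {

            public static String buildLink(String pageUrl, String email) {
                String key = "key" + UUID.randomUUID().toString().replace("-", "");
                String encodedEmail = URLEncoder.encode(email, StandardCharsets.UTF_8);
                // e.g. http://www.site.com/.work/page.php?keyabc123...&email=user%40example.com
                return pageUrl + "?" + key + "&email=" + encodedEmail;
            }

            public static void main(String[] args) {
                System.out.println(buildLink("http://www.site.com/.work/page.php", "user@example.com"));
            }
        }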

    Read the article

  • Wireless device bug on 13.10. BCM4313 registers as eth1 instead of wlan0 and no internet access

    - by user205691
    My Hotel wiFi requires me to login with a username & password after connecting to the hotspot. So, my browser would open a page with username & passwrd fields to login and then connect to internet. But unfortunately, firefox & chromium dont seem to work. i dont think it is browser related but a setting for the wifi router or driver which is creating this issue. using Broadcom 801.11 STA wireless driver (proprietary). tried open source as well but same result !! The image linked below shows my wifi connection setting & Chromium. The login page itself comes up after a long time and after entering the credentials, it keeps loading for ever !! it is the same case for every other browser.. so i dont think its browser issue but something to do with wifi setting or network manager stuff.. interestingly, i am able to connect to WiFi networks with WPA key without any issue. Adhoc hotspot is a problem and that is my regular home network :( .. I hope i can get some help solving this issue ! I have tried repeating the same hotspot after login from my android, by creating a virtual repeater with WPA key and it works. I can browse on ubuntu using this method.. but cant be doing this regularly ! I tried loading the same login page of the hotel wifi while browsing through my repeater wifi created on mobile and screen shot attached below. the page loads up quick and easy.. so this means something is wrong with the way network manager handles adhoc connectivity & login ?? i installed wicd0 but it crashes on startup and not helpful at all ! Screenshot of Chromium page Login page with repeated hotspot ifconfig in my terminal results: krishna@krishna-HP-ENVY-4-Notebook-PC:~$ ifconfig eth0 Link encap:Ethernet HWaddr 28:92:4a:1d:54:fa UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr e0:06:e6:89:fa:49 inet addr:10.24.1.71 Bcast:10.24.1.255 Mask:255.255.255.0 inet6 addr: fe80::e206:e6ff:fe89:fa49/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:10940 errors:0 dropped:0 overruns:0 frame:348431 TX packets:6611 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:7669631 (7.6 MB) TX bytes:864195 (864.1 KB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2146 errors:0 dropped:0 overruns:0 frame:0 TX packets:2146 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:166120 (166.1 KB) TX bytes:166120 (166.1 KB) I wonder why is the wireless configured under eth1 ? I think this is a bug with earlier ubuntu versions, but is this normal in 13.10 or is there a wrong configuration here ? The wireless device in my pc is BCM4313 and i have installed the bcmwl-kernel-sources, wireless-tools to support the device. i also reinstalled the bcmwl-kernel as suggested on broadcom website, via synaptic package manager. Nothing has changed this situation ! I tried booting into liveUSB and then ifconfig results show wireless under wlan0. But then the wireless connects and loads the login page. So is the problem with the device configuration now ? i really want to get this fixed before i start configuring the other stuff like ATI graphics and such on the laptop for overheating.. lack of internet access is too bad a bug for me :P any help is appreciated!

    Read the article

  • Web Service Example - Part 3: Asynchronous

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 3 of our Web Service examples.  In this posting we'll take a look at firing the web service asynchronously and then filling in the UI when it completes.  This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data. Getting the sample code: Just click here to download a zip of the entire project.  You can unzip it and load it into JDeveloper and deploy it either to iOS or Android.  Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.  Note: This is a different workspace than WS-Part2 What's different? In this example, when you click the Search button on the Forecast By Zip option, now it takes you directly to the results page, which is initially blank.  When the web service returns a second or two later the data pops into the UI.  If you go back to the search page and hit Search it will again clear the results and invoke the web service asynchronously.  This isn't really that useful for this particular example but it shows an important technique that can be used for other use cases. How it was done 1)  First we created a new class, ForecastWorker, that implements the Runnable interface.  This is used as our worker class that we create an instance of and pass to a new thread that we create when the Search button is pressed inside the retrieveForecast actionListener handler.  Once the thread is started, the retrieveForecast returns immediately.  2)  The rest of the code that we had previously in the retrieveForecast method has now been moved to the retrieveForecastAsync.  Note that we've also added synchronized specifiers on both these methods so they are protected from re-entrancy. 3)  The run method of the ForecastWorker class then calls the retrieveForecastAsync method.  This executes the web service code that we had previously, but now on a separate thread so the UI is not locked.  If we had already shown data on the screen it would have appeared before this was invoked.  Note that you do not see a loading indicator either because this is on a separate thread and nothing is blocked. 4)  The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents().   We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them.  As soon as you do this, the UI will get updated if any changes have been queued. Summary of Fundamental Changes In This Application The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns.  This allows an application to provide a better user experience in many cases because data that is already available locally is displayed while lengthy queries or web service calls can be done in the background and the UI updated when they return.  There are many different use cases for background threads and this is just one example of optimizing the user experience and generating a better mobile application. 
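    For readers who want the shape of the code without downloading the workspace, a compact sketch of the pattern the post describes is given below: the Runnable worker, the synchronized asynchronous method, and the final flushDataChangeEvents() call. The import path and the callWebService() placeholder are assumptions for illustration, not lines taken from the sample project.

        // Sketch of the background-worker pattern described above. The import path is
        // the usual one for AdfmfJavaUtilities but should be verified against your
        // ADF Mobile install; callWebService() is a hypothetical placeholder.
        import oracle.adfmf.framework.api.AdfmfJavaUtilities;

        public class ForecastBean {

            // Action handler bound to the Search button: starts the worker and returns immediately.
            public void retrieveForecast() {
                new Thread(new ForecastWorker()).start();
            }

            // Does the slow work; synchronized so overlapping searches do not interleave.
            private synchronized void retrieveForecastAsync() {
                callWebService();                              // placeholder for the real web service call
                // Changes made on a background thread are not pushed to the UI until flushed.
                AdfmfJavaUtilities.flushDataChangeEvents();
            }

            private class ForecastWorker implements Runnable {
                public void run() {
                    retrieveForecastAsync();
                }
            }

            private void callWebService() {
                // ... invoke the forecast web service and update the bound collections ...
            }
        }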

    Read the article

  • Fluent NHibernate: mapping complex many-to-many (with additional columns) and setting fetch

    - by HackedByChinese
    I need a Fluent NHibernate mapping that will fulfill the following (if nothing else, I'll also take the appropriate NHibernate XML mapping and reverse engineer it). DETAILS I have a many-to-many relationship between two entities: Parent and Child. That is accomplished by an additional table to store the identities of the Parent and Child. However, I also need to define two additional columns on that mapping that provide more information about the relationship. This is roughly how I've defined my types, at least the relevant parts (where Entity is some base type that provides an Id property and checks for equivalence based on that Id): public class Parent : Entity { public virtual IList<ParentChildRelationship> Children { get; protected set; } public virtual void AddChildRelationship(Child child, int customerId) { var relationship = new ParentChildRelationship { CustomerId = customerId, Parent = this, Child = child }; if (Children == null) Children = new List<ParentChildRelationship>(); if (Children.Contains(relationship)) return; relationship.Sequence = Children.Count; Children.Add(relationship); } } public class Child : Entity { // child doesn't care about its relationships } public class ParentChildRelationship { public int CustomerId { get; set; } public Parent Parent { get; set; } public Child Child { get; set; } public int Sequence { get; set; } public override bool Equals(object obj) { if (ReferenceEquals(null, obj)) return false; if (ReferenceEquals(this, obj)) return true; var other = obj as ParentChildRelationship; if (return other == null) return false; return (CustomerId == other.CustomerId && Parent == other.Parent && Child == other.Child); } public override int GetHashCode() { unchecked { int result = CustomerId; result = Parent == null ? 0 : (result*397) ^ Parent.GetHashCode(); result = Child == null ? 0 : (result*397) ^ Child.GetHashCode(); return result; } } } The tables in the database look approximately like (assume primary/foreign keys and forgive syntax): create table Parent ( id int identity(1,1) not null ) create table Child ( id int identity(1,1) not null ) create table ParentChildRelationship ( customerId int not null, parent_id int not null, child_id int not null, sequence int not null ) I'm OK with Parent.Children being a lazy loaded property. However, the ParentChildRelationship should eager load ParentChildRelationship.Child. Furthermore, I want to use a Join when I eager load. The SQL, when accessing Parent.Children, NHibernate should generate an equivalent query to: SELECT * FROM ParentChildRelationship rel LEFT OUTER JOIN Child ch ON rel.child_id = ch.id WHERE parent_id = ? 
OK, so to do that I have mappings that look like this: ParentMap : ClassMap<Parent> { public ParentMap() { Table("Parent"); Id(c => c.Id).GeneratedBy.Identity(); HasMany(c => c.Children).KeyColumn("parent_id"); } } ChildMap : ClassMap<Child> { public ChildMap() { Table("Child"); Id(c => c.Id).GeneratedBy.Identity(); } } ParentChildRelationshipMap : ClassMap<ParentChildRelationship> { public ParentChildRelationshipMap() { Table("ParentChildRelationship"); CompositeId() .KeyProperty(c => c.CustomerId, "customerId") .KeyReference(c => c.Parent, "parent_id") .KeyReference(c => c.Child, "child_id"); Map(c => c.Sequence).Not.Nullable(); } } So, in my test if i try to get myParentRepo.Get(1).Children, it does in fact get me all the relationships and, as I access them from the relationship, the Child objects (for example, I can grab them all by doing parent.Children.Select(r => r.Child).ToList()). However, the SQL that NHibernate is generating is inefficient. When I access parent.Children, NHIbernate does a SELECT * FROM ParentChildRelationship WHERE parent_id = 1 and then a SELECT * FROM Child WHERE id = ? for each child in each relationship. I understand why NHibernate is doing this, but I can't figure out how to set up the mapping to make NHibernate query the way I mentioned above.
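    The question is specifically about Fluent NHibernate (C#), but for comparison the same eager join-fetch idea expressed with Java Hibernate/JPA annotations is sketched below; it is an analogue only, simplified to a surrogate key instead of the composite key, and is not the Fluent NHibernate answer itself. In Fluent NHibernate the corresponding levers are the fetch and lazy-load options on the reference mapping; whether they can be applied to a KeyReference inside a CompositeId is worth verifying against the documentation.

        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.Id;
        import javax.persistence.ManyToOne;

        import org.hibernate.annotations.Fetch;
        import org.hibernate.annotations.FetchMode;

        // Java Hibernate/JPA analogue of the mapping discussed above: the association
        // from the relationship entity to Child is EAGER with FetchMode.JOIN, so
        // reading a parent's relationships pulls the children in with an outer join
        // instead of issuing one SELECT per child.
        @Entity
        public class ParentChildRelationship {

            @Id
            private Long id;                      // simplified surrogate key (the original uses a composite key)

            @ManyToOne(fetch = FetchType.LAZY)    // no need to walk back up to the parent eagerly
            private Parent parent;

            @ManyToOne(fetch = FetchType.EAGER)   // load the child together with the relationship row
            @Fetch(FetchMode.JOIN)                // ...and do it with a JOIN rather than a follow-up SELECT
            private Child child;

            private int sequence;
        }

        @Entity
        class Parent {
            @Id
            private Long id;
        }

        @Entity
        class Child {
            @Id
            private Long id;
        }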

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >