Search Results

Search found 210 results on 9 pages for 'undocumented'.

Page 6/9 | < Previous Page | 2 3 4 5 6 7 8 9  | Next Page >

  • How SqlDataAdapter works internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since that is usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind:
      1. It creates a prepared SQL statement (using SqlCommand.Prepare()) with CommandText filled in and SQL parameters initialized to the correct SQL types. Then it loops over the DataRows that need updating, and for each record it updates the parameter values and calls SqlCommand.ExecuteNonQuery().
      2. It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters). Several SqlCommands at once are then batched to the server (depending on UpdateBatchSize).
      3. It uses some special, low-level or undocumented SQL driver instructions that can update several rows in an efficient way (the rows to update would need to be provided in a special data format, and the same SQL query (the UpdateCommand here) would be executed against each of them).
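    For reference, here is a minimal sketch (not from the question) of how batched updates are wired up from the caller's side; the table, columns and connection string are hypothetical, and UpdateBatchSize is the knob that controls how many row updates are sent per round-trip:

      // Hedged sketch: batched UPDATEs via SqlDataAdapter. Table/column names
      // (Widgets, Id, Name) and the connection string are invented for illustration.
      using System.Data;
      using System.Data.SqlClient;

      class BatchUpdateSketch
      {
          static void Main()
          {
              using (var conn = new SqlConnection("Data Source=.;Initial Catalog=Demo;Integrated Security=true"))
              using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Widgets", conn))
              {
                  var update = new SqlCommand("UPDATE Widgets SET Name = @Name WHERE Id = @Id", conn);
                  update.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name");
                  update.Parameters.Add("@Id", SqlDbType.Int, 4, "Id");
                  update.UpdatedRowSource = UpdateRowSource.None; // required when batching
                  adapter.UpdateCommand = update;
                  adapter.UpdateBatchSize = 100;                  // 0 = one batch, as large as possible

                  var table = new DataTable();
                  adapter.Fill(table);
                  foreach (DataRow row in table.Rows)
                      row["Name"] = ((string)row["Name"]).Trim(); // marks rows as Modified
                  adapter.Update(table);                          // updates sent in batches of 100
              }
          }
      }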

    Read the article

  • Determine if FieldInfo is compiler generated backingfield

    - by Steffen
    The title pretty much says it all: how do I know if I'm getting a compiler-generated backing field for a { get; set; } property? I'm running this code to get my FieldInfos:

      class MyType
      {
          private int foo;
          public int bar { get; private set; }
      }

      Type type = typeof(MyType);
      foreach (FieldInfo fi in type.GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly | BindingFlags.NonPublic))
      {
          // Gets both foo and bar; however, bar's field is called <bar>k__BackingField.
      }

    So the question is: can I somehow detect that a FieldInfo is a backing field, without relying on checking its name? (The name is pretty much undocumented, and could break in the next version of the framework.)
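    One possible approach (a sketch, not from the question itself): compiler-generated backing fields are emitted with the [CompilerGenerated] attribute, so testing for that attribute avoids parsing the name - with the caveat that other compiler-generated fields (e.g. for anonymous types or closures) carry it too:

      // Sketch: detect a backing field by attribute rather than by name.
      using System.Reflection;
      using System.Runtime.CompilerServices;

      static class FieldInfoExtensions
      {
          public static bool IsBackingField(this FieldInfo fi)
          {
              // [CompilerGenerated] is applied to auto-property backing fields.
              return fi.IsDefined(typeof(CompilerGeneratedAttribute), false);
          }
      }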

    Read the article

  • Friday Fun: Play 3D Rally Racing in Google Chrome

    - by Asian Angel
    Are you a racing fan in need of a short (or long) break from work? Then get ready to enjoy a mid-day speed boost with the 3D Rally Racing extension for Google Chrome. 3D Rally Racing in Action This is the opening screen for 3D Rally Racing. You can start game play, view current best times, and read through the instructions from here. The first thing that you should do is have a quick look at the instructions to help you get set up and started. Click on “Play” to start the process. Before you can go further you will need to choose a “User Name”. Once you have done that click “Select Track”… Note: The extension will retain your name for later use even if you close your browser. When you first start out you will only have access to two tracks…the others require reaching a certain score/level to unlock them. Once you select a track you will be taken to the next screen. After you have selected a track you will need to choose your car and car color. All that is left to do afterwards is click on “Go Race”. Note: You will be competing against three other vehicles in the race. Here is a look at the “Desert Race Track”… And a look at the “Snow Race Track”. This game moves quickly and it is easy to fall behind if you are not careful! You can have a lot of fun playing this game while you are waiting for the day to end. Conclusion If you love racing games and want a fun way to waste the rest of afternoon at work, then you should definitely give 3D Rally Racing a try. Links Download the 3d Rally Racing extension (Google Chrome Extensions)

    Read the article

  • Add SiteAdvisor to Google Chrome

    - by Asian Angel
    With the continued increase in malware, knowing when a website is trouble can save you from a painful experience. If you are looking to add a bit more security to your Chromium-based browser, then join us as we look at the SiteAdvisor for Chrome extension. SiteAdvisor for Chrome in Action Once you have installed the extension you should go into the options first. You can choose which style of warning you would like to receive when encountering a “less than reputable” website. The default setting is the “Toolbar Icon Warning”, but it can be easily changed to a full “Webpage Redirect”. Note: The “Toolbar Button/Icon” does not display a drop-down window when clicked on. Here is an example if you go with the default and receive the “Toolbar Icon Warning”. Once again the same website, except with the full “Webpage Redirect” in effect…of the two options this is the recommended setting. Notice that details are provided for “why” the website is listed as “less than reputable”. An example of a website that is all good…nothing but checkmarks and green. Terrific! There may be those of you who would be more comfortable with a “double layer” of protection while browsing. As you can see here, SiteAdvisor and WOT work nicely together. You can read more about WOT for Chrome here. Conclusion If you worry about “less than reputable” websites, SiteAdvisor for Chrome can help provide a layer of security that will warn you when you are getting ready to “browse” into possible trouble. Links Download the SiteAdvisor for Chrome extension (Google Chrome Extensions)

    Read the article

  • Challenges in multi-player Android Game Server with RESTful Nature

    - by Kush
    I'm working on an Android game based on Contract Bridge, as part of my college summer internship project. The game will be multi-player, such that 4 Android devices can play it, so there's no bot or CPU player to be developed. At the time I got the project, I realized that several students had already worked on it, but none of their work is reusable now (for a variety of reasons: undocumented code and design architecture, different platform implementations). I have experience working on several open source projects, and hence I want to build this project so that the components I make are as reusable as possible. Now, as the game is multi-player and the entire game progress will be handled on the server, I'm currently working on the server's design. Since I wanted to make the game server reusable so that any client platform can use it, I was initially torn between sockets and REST for the server's design, but eventually settled on REST APIs. Since I have to keep all players in sync while they make moves in the game, I plan to use a database on the server to keep all players' progress, per game table (in Bridge, 4 players play at a single table, and the server will handle many such game tables). I don't know if it's an appropriate decision to use a database as the shared medium to track each game table's progress (let me know if there's a better option). Obviously, when a game is completed, that table's data in the server's database is discarded. Now the problem: access to a REST service is an HTTP call, so as long as a client doesn't make a request, the server remains idle. Consider a situation where a player has played a card on his device, and the device requests that the server apply this change. I then need to let the other three devices know that the player has played a card, and also update the view on their devices. AFAIK, REST cannot provide a push-notification system, since the connection to the server is not persistent. One solution I thought of was to make each device constantly poll the server for changes (say every 56 ms) and, when changes are found, reflect them on the device. But I feel this is not an elegant way, as every HTTP request is expensive. (I chose REST to make the gameplay experience robust, since a mobile device tends to get disconnected from the Internet, and with a socket-like persistent connection the entire game's progress is subject to loss. Also, portability on the client end is important.) Also, imagine a situation where 10 game tables are in progress and 40 players are playing: the server must be able to handle the flood of HTTP requests from all the devices polling every 56 ms, and I wonder if that would look like a DoS attack. So, given the situation described, am I on the right track with the server design? I want to be sure before I proceed much further with the code.
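    Not from the original question - just a rough C# sketch of the long-polling alternative to a fixed 56 ms poll (the URL, resource and response format are all invented for illustration): the client asks for events newer than the last one it saw, and the server can hold the call open until something happens.

      // Hypothetical long-polling client; /table/42/changes?since=N is an invented resource.
      using System;
      using System.Net;

      class TablePoller
      {
          static long lastSeen = 0;

          static void Main()
          {
              using (var client = new WebClient())
              {
                  while (true)
                  {
                      // With long polling, the server holds this call open until a player
                      // acts (or a timeout elapses), so clients stay in sync without
                      // hammering the server every 56 ms.
                      string url = "http://example.com/table/42/changes?since=" + lastSeen;
                      string body = client.DownloadString(url); // blocks until the server replies
                      lastSeen = Apply(body);
                  }
              }
          }

          // Stub: parse the response, update the local game state/view,
          // and return the id of the newest event applied.
          static long Apply(string body)
          {
              Console.WriteLine(body);
              return lastSeen + 1;
          }
      }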

    Read the article

  • DBCC MEMUSAGE in 2005/8 ?

    - by steveh99999
    I used to like using the undocumented command DBCC MEMUSAGE in SQL 2000 to see which tables were using space in the SQL data cache. In SQL 2005, this command is no longer present. Instead a DMV – sys.dm_os_buffer_descriptors – can be used to display the data cache contents, but this doesn't quite give you the same output as DBCC MEMUSAGE. I'm also aware that you can use Quest's Spotlight tool to view a summary of data cache contents. Using this post by Umachandar Jayachandran of Microsoft, I was able to create the following equivalent for SQL 2005/8. I've wrapped Umachandar's original query in a CTE to produce summary information:

      ;WITH memusage_CTE AS
      (SELECT bd.database_id, bd.file_id, bd.page_id, bd.page_type,
          COALESCE(p1.object_id, p2.object_id) AS object_id,
          COALESCE(p1.index_id, p2.index_id) AS index_id,
          bd.row_count, bd.free_space_in_bytes, CONVERT(TINYINT, bd.is_modified) AS 'DirtyPage'
      FROM sys.dm_os_buffer_descriptors AS bd
      JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
      OUTER APPLY (SELECT TOP(1) p.object_id, p.index_id
                   FROM sys.partitions AS p
                   WHERE p.hobt_id = au.container_id AND au.type IN (1, 3)) AS p1
      OUTER APPLY (SELECT TOP(1) p.object_id, p.index_id
                   FROM sys.partitions AS p
                   WHERE p.partition_id = au.container_id AND au.type = 2) AS p2
      WHERE bd.database_id = DB_ID() AND bd.page_type IN ('DATA_PAGE', 'INDEX_PAGE')
      )
      SELECT TOP 20 DB_NAME(database_id) AS 'Database', OBJECT_NAME(object_id, database_id) AS 'Table Name',
          index_id, COUNT(*) AS 'Pages in Cache', SUM(dirtyPage) AS 'Dirty Pages'
      FROM memusage_CTE
      GROUP BY database_id, object_id, index_id
      ORDER BY COUNT(*) DESC

    I'm not 100% happy with the results of the above query, however... I've noticed that on a busy BizTalk MessageBox database it will return information on pages that contain GHOST rows, i.e. where data has already been deleted but has yet to be cleaned up by a background process. I need to investigate further why the cache on this server apparently contains so much GHOST data... For more information on the background ghost cleanup process, see this article by Paul Randal. However, I think the results of this query should still be of interest to a DBA. I have another post to come shortly regarding an example I encountered where this information proved useful to me... I notice that in SQL 2008, sys.dm_os_buffer_descriptors gained an extra column – numa_node – and I'm interested to see how it is populated and how useful it can be on a NUMA-enabled system. I'm assuming that in theory you could use this column to help analyse how your tables are spread across a NUMA-enabled data cache?

    Read the article

  • SQL Prompt Easter Egg

    - by Johnm
    Having Red Gate's SQL Prompt installed with SQL Server Management Studio has saved me many headaches over the years of its use. It is extremely nice to type in a table name and see not only the column names, but also their data types and identification of primary keys. Another cool feature is the built-in shortcut scripts that are included toward the bottom of the suggestion box. An example of these shortcut scripts would be to type in the letters cv and then hit Enter, and the following template for CREATE VIEW will appear:

      CREATE VIEW
      --WITH ENCRYPTION, SCHEMABINDING, VIEW_METADATA
      AS
          SELECT /* query specification */
      -- WITH CHECK OPTION
      GO

    These scripts are great, and on occasion rather humorous. Recently, I was writing an UPDATE statement that would update a derived and aliased set of data. An example of such a statement is as follows:

      UPDATE y
      SET y.[FieldA] = y.[FieldB]
      FROM
          (
              SELECT
                  a.[FieldA]
                  ,b.[FieldB]
              FROM
                  [MyTableA] a
                  INNER JOIN [MyTableB] b
                      ON a.[PKA] = b.[PKB]
          ) y;

    Upon typing the UPDATE y portion I hit Enter, and the expression "A A A A R G H !" appeared, resulting in an unexpected burst of laughter. With a dash of curiosity and a pinch of research I discovered that at the bottom of the SQL Prompt suggestion box resides a shortcut script called "yell", which is described as "Vent your frustration". Another humorous shortcut script is "neo", which is described as "-- I know Kung-Fu". All that is required for these to activate is to type the first letter and hit Enter. I wonder if there are any undocumented ones?

    Read the article

  • Handy SQL Server Function Series: Part 1

    - by Most Valuable Yak (Rob Volk)
    I've been preparing to give a presentation on SQL Server for a while now, and a topic that was recommended was SQL Server functions. More specifically, the lesser-known functions (like @@OPTIONS), and maybe some interesting ways to use well-known functions (like using PARSENAME to split IP addresses). I think this is a veritable goldmine of useful information, and researching for the presentation has confirmed that beyond my initial expectations. I even found a few undocumented/underdocumented functions, so for the first official article in this series I thought I'd start with 2 of them, COLLATIONPROPERTY() and COLLATIONPROPERTYFROMID(). COLLATIONPROPERTY() provides information about (wait for it) collations, SQL Server's method for handling foreign character sets, sort orders, and case- or accent-sensitivity when sorting character data. The Books Online entry for COLLATIONPROPERTY() lists 4 options: code page, locale ID, comparison style and version. Used in conjunction with fn_helpcollations():

      SELECT *,
          COLLATIONPROPERTY(name, 'LCID') LCID,
          COLLATIONPROPERTY(name, 'CodePage') CodePage,
          COLLATIONPROPERTY(name, 'ComparisonStyle') ComparisonStyle,
          COLLATIONPROPERTY(name, 'Version') Version
      FROM fn_helpcollations()

    you can get some excellent information. (C'mon, be honest, did you even know about fn_helpcollations?) Collations in SQL Server have a unique name and ID, and you'll see one or both in various system tables or views like syscolumns, sys.columns, and INFORMATION_SCHEMA.COLUMNS. Unfortunately they only link the ID and name for collations of existing columns, so if you wanted to know the collation ID of Albanian_CI_AI_WS, you'd have to declare a column with that collation and query the system table. While poking around the OBJECT_DEFINITION() of sys.columns I found a reference to COLLATIONPROPERTYFROMID(), and the unknown property "Name". Not surprisingly, this is how sys.columns finds the name of the collation, based on the ID stored in the system tables. (Check yourself if you don't believe me.) Somewhat surprisingly, the "Name" property also works for COLLATIONPROPERTY(), although you'd already know the name at that point. Some wild guesses and tests revealed that "CollationID" is also a valid property for both functions, so now:

      SELECT *,
          COLLATIONPROPERTY(name, 'LCID') LCID,
          COLLATIONPROPERTY(name, 'CodePage') CodePage,
          COLLATIONPROPERTY(name, 'ComparisonStyle') ComparisonStyle,
          COLLATIONPROPERTY(name, 'Version') Version,
          COLLATIONPROPERTY(name, 'CollationID') CollationID
      FROM fn_helpcollations()

    will get you the collation ID-name link you…probably didn't know or care about, but if you ever get on Jeopardy! and this question comes up, feel free to send some of your winnings my way. :) And last but not least, COLLATIONPROPERTYFROMID() uses the same properties as COLLATIONPROPERTY(), so you can use either one depending on which value you have available. Keep an eye out for Part 2!

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store each for 3 different languages, with about 40 product categories for a few hundred products. [rant] With no prior experience with any PHP e-commerce system, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as apparently it runs on the same server as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields to the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little helpful, clear documentation (unlike CodeIgniter), and various breaking changes between minor point revisions that make it hard to find useful information, Magento 1.4 is a developer killer. [/rant] The client is planning to redesign the site and has decided to move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought. The client is also open to other e-commerce systems. I've looked at OpenCart and it seems quite developer-friendly, with a fairly simple structure. I found some complaints regarding its performance when a shop has thousands of categories or products, but this is not an issue with the number of products my client has. It seems to be solid ground for easy customization, to bring the rewards system and ERP integration over. What should the client upgrade to in this case?

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table as a nullable column or with default values. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

      USE master
      GO
      --Drop database if exists
      IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
      DROP DATABASE AddColumn
      --Create the database
      CREATE DATABASE AddColumn
      GO
      USE AddColumn
      GO
      --Drop the table if exists
      IF EXISTS (SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
      DROP TABLE ExistingTable
      GO
      --Create the table
      CREATE TABLE ExistingTable
      (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
      DateTime1 DATETIME DEFAULT GETDATE(),
      DateTime2 DATETIME DEFAULT GETDATE(),
      DateTime3 DATETIME DEFAULT GETDATE(),
      DateTime4 DATETIME DEFAULT GETDATE(),
      Gendar CHAR(1) DEFAULT 'M',
      STATUS1 CHAR(1) DEFAULT 'Y')
      GO
      -- Insert 100,000 records with default values
      INSERT INTO ExistingTable DEFAULT VALUES
      GO 100000

    Before adding a column, let us look at some details of the database. Running DBCC IND (AddColumn, ExistingTable, 1) shows 637 pages for the created table. Adding a column: you can add a column to the table with the following statement, which adds the column with a NULL value for the existing records:

      ALTER TABLE ExistingTable ADD NewColumn INT NULL

    Alternatively, you could add the column with a default value, which sets the value 1 for the existing records:

      ALTER TABLE ExistingTable ADD NewColumn INT NOT NULL DEFAULT 1

    In the table below I measured the performance difference between the two statements:

      Parameter   Nullable Column   Default Value
      CPU         31                702
      Duration    129 ms            6653 ms
      Reads       38                116,397
      Writes      6                 1329
      Row Count   0                 100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is why adding the column with a default value takes more duration and CPU. We can verify this by several methods. Number of pages: the number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production; however, since there is no official word from Microsoft, use it "at your own risk":

      DBCC IND (AddColumn, ExistingTable, 1)

      Before adding the columns:           637 pages
      Adding a column with NULL:           637 pages
      Adding a column with DEFAULT value:  1270 pages

    This clearly shows that pages are physically modified. Please note, the high value for "Adding a column with DEFAULT value" is also partly a result of page splits. Continues…

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented “feature” in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you’re familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties – in my opinion – is the loss of the snapshot label that served two purposes… First, it flagged the report as a snapshot. Second, it let you know when that snapshot was created. As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn’t see it in the demonstration I was preparing. It sort of upset my routine, and I’m rather partial to my routines. I thought perhaps I wasn’t looking in the right place and changed Report Manager from Tile View to Detail View, but no – that label was still missing. In the grand scheme of life, it’s not an earth-shattering change, but you’ll have to look at the Modified Date in Details View to know when the snapshot was run. Or hope that the report developer included a textbox to show the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!) A snapshot from the past In case you don’t remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), here’s an image I snagged from my Reporting Services 2008 Step by Step manuscript: A snapshot in the present A report server running in SharePoint integrated mode had no such label. There you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. Here’s a screenshot of Report Manager in the 2008 R2 version. One of these is a snapshot and the rest execute on demand. Can you tell which is the snapshot? Consider descriptions as an alternative So my report snapshot demonstration has one less step, and I’ll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that the report is a snapshot. It might save you a few questions about why the data isn’t up-to-date if the users know that something changed in the source of the report. Notice that the full description doesn’t display in Tile View, so keep it short and sweet or instruct users to open Details View to see the entire description.

    Read the article

  • Turnkey with LightSwitch

    - by Laila
    Microsoft has long wanted to find a replacement for Microsoft Access. The best attempt yet, which is due out in, or before, September, is Visual Studio LightSwitch, with which it is said to be as 'easy as flipping a switch' to use Silverlight to create simple form-driven business applications. It is easy to get confused by the various initiatives from Microsoft. No, this isn't WebMatrix. There is no 'Razor', for this isn't meant for cute little e-commerce sites; it is designed to build simple database applications of the card-box type. It is more clearly a .NET-based solution to the problem that every business seems to suffer from: the plethora of Access-based and Excel-based 'private' and departmental database applications. These are a nightmare for any IT department, since they are often 'stealth' applications built by the business in the teeth of opposition from the IT department zealots. As they are undocumented, it is scarily easy to bring a whole department into disarray by decommissioning a PC tucked under a desk somewhere. With LightSwitch, it is easy to re-write such applications in a standard, maintainable way, using a SQL Server database, deployed somewhere reasonably safe such as Azure. Even SharePoint or Windows Communication Foundation can be used as data sources. Oracle's ApEx has taken off remarkably well, and has shaken the perception that, for the business user, Oracle must remain a mystic force accessible only to the priests and acolytes. Microsoft, by comparison, had only Access, which was first released in 1992, the year of the Madonna conical bustier. It looks just as dated. Microsoft badly needed an entirely new solution to the same business requirement that led to Access's and FoxPro's long-time popularity, but one which had the same allure as ApEx. LightSwitch is sound in its ideas, and comfortingly conventional in its architecture. By giving easy access to SQL Server databases, and providing a 'thumb and blanket' migration path to Access-heads, LightSwitch seems likely to offer a simple way of pulling more Microsoft users into the .NET community. If Microsoft puts its weight behind it, then it will give some glimmer of hope to the many Silverlight developers that Microsoft is capable of seeing through its .NET revolution.

    Read the article

  • Current State EA: Focus on the Integration!!!

    - by Eric A. Stephens
    A recent project has me at the front end of a large implementation effort covering multiple software components. In addition to the challenge of integrating 15-20 separate and new software components, there is the challenge of integrating the portfolio into an existing environment. As with other clients I've worked with and other environments I've worked in over the years, this is typical: the applications are undocumented and under-patched, leaving a mystery for any architect leading change. We can boil down most architecture development methodologies (ADMs) into first understanding the current/baseline state and then envisioning one or more future states. Many pundits emphasize the need to focus on the future/target states. I agree, since enterprise architecture (EA) is about where you are going and not so much where you have been. But to be effective in the future, I contend some focused time needs to be spent on the current state. And specifically on the integration. Integration is always the difficult part of a project (I might put it more coarsely at a cocktail party). While I don't have a case study, my anecdotal experience suggests poorly integrated application portfolios tend to cost more to operate and create entropy when trying to respond to new changes and opportunities. In the aforementioned project, I was able to get one of our EAs assigned to focus on just integration almost immediately. While we're still early in the process, this EA is uncovering all sorts of information that will greatly assist our future-state planning for this solution. This information is driving early decision-making that we anticipate will accelerate our efforts moving forward.

    Read the article

  • Accessing network shares on Windows7 via SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However, I cannot seem to access shared filesystems. Windows is refusing to do discovery on the VPN network. I suspect part of the problem is that Windows persistently considers the VPN connection to be a 'public network'. Normally, you can open the Network and Sharing Center and modify this setting; however, it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks. I also disabled the Windows firewall for good measure. Still no luck. I can access the server directly by putting \\192.168.1.240 in the taskbar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share You do not have permission to access ..."; it never asks for a domain password. I also tried Windows 7's native VPN functionality - it couldn't successfully connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there. What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency to not give me much useful information that might let me understand what it is trying to do and what is going wrong.

    Read the article

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records in each. That's 40 billion row updates, which would explain part of how the log grew to 200GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide - we have almost 200 databases residing there. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrunk the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files. Paul Randal, from http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx: Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database. Were there any other options I'm not aware of?

    Read the article

  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question What is your oldest hardware that still works?, I'd like to ask: What is the oldest hardware you know that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)? Most organizations will retire / upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes, because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years), but fortunately still running without problems; however, getting spares would have been very difficult, and since software installation was undocumented, any significant system changes or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible. I also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants. Which old hardware have you encountered? How did you manage these challenges? Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired

    Read the article

  • Log and debug/decrypt an windows application's HTTPS traffic

    - by cweiske
    I've got a proprietary Windows-only application that uses HTTPS to speak with an (also proprietary, undocumented) web service. To ultimately be able to use the web service's functionality on my Linux machines, I want to reverse-engineer the web service API by analyzing the requests sent by the application. Now the question: how can I decrypt and log the HTTPS traffic? I know of several solutions which don't apply in my case: Fiddler is a man-in-the-middle HTTPS proxy which I cannot use, since the application doesn't support proxies. Also, I do not (yet) know if it works with self-signed server certificates, which I doubt. Wireshark is able to decrypt SSL streams if you have the server's private certificate, which I don't have. Any browser extension is out, since the application is not a browser. If I remember correctly, there have been some trojans that capture online banking information by hooking into/replacing the Windows crypto API. Since the machine is mine, low-level changes are possible. Maybe there is a non-trojan (white-hat) network logging application out there which does the same? There is a black-hat presentation with some details available to read. They refer to Microsoft Research's Detours for easy API hooking.

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much of a table's data is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and used in a query), is the data of the single table stored in the memory cache multiple times, or only once? Here is a query you can run to figure out what kind of data is stored in the cache:

      USE AdventureWorks
      GO
      SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc
      FROM sys.dm_os_buffer_descriptors AS bd
      INNER JOIN (
          SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc
          FROM (
              SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
              FROM sys.allocation_units AS au
              INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
              UNION ALL
              SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
              FROM sys.allocation_units AS au
              INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.type = 2
          ) AS s_obj
          LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
      ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id
      WHERE database_id = DB_ID()
      GROUP BY name, index_id, IndexName, IndexTypeDesc
      ORDER BY cached_pages_count DESC;
      GO

    Now let us run the query above and observe the output. There are four columns: Cached_Pages_Count lists the pages cached in memory; BaseTableName lists the base table from which the data pages are cached; IndexName lists the name of the index from which pages are cached; IndexTypeDesc lists the type of index. Now, let us do one more experiment. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database:

      DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers, so we will be able to start again from there. Now run the following script and check its execution plan:

      USE AdventureWorks
      GO
      SELECT UnitPrice, ModifiedDate
      FROM Sales.SalesOrderDetail
      WHERE SalesOrderDetailID BETWEEN 1 AND 100
      GO

    The execution plan contains the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It is clear from the resultset that when more than one index is used, data pages related to both (or all) of the indexes are stored in the memory cache separately. Let me know what you think of this article. I had great pleasure writing it, because I was able to write on the subject I like most. In the next article, we will see exactly which data are cached and which are not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But that book, as well as most questions on legacy code I've found so far, is concerned with inheriting code as-is. In this case I actually have access to the original developer, and we do have some time for an orderly hand-over. Some background on the piece of code I will be inheriting:
      - It's functioning: there are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not-too-distant future.
      - Undocumented: there is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years.
      - Only higher-level integration tests: there are only integration tests testing proper interaction with other components via the API (again, black box).
      - Very low-level, optimized for speed: because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
      - Concurrent and lock-free: while I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity.
      - Large codebase: this particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
      - Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic.
    I was wondering how the time until his departure would best be spent. Here are a couple of ideas:
      - Get everything to build on my machine: even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while? So this should probably be the first order of business.
      - More tests: while I would like more class-level unit tests, so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
      - What to document: I think for starters it would be best to focus documentation on those areas of the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that were put in for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
      - How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
      - Who should do the documenting: I was wondering what would be better - to have him write the documentation, or to have him explain it to me so I can write the documentation. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly.
      - Refactoring using pair programming: this might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam

    - by JuergenKress
    OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam This September, during Oracle OpenWorld 2013, the weather in San Francisco, as you can see from the photo, was exceptionally sunny. The dramatic final few days of the America's Cup sailing competition, being held every day in the bay, coincided with the conference and meant that there was almost a holiday feel to the whole event. Here's my annual round-up of what I think was most interesting at OpenWorld 2013 for Fusion Middleware architects and administrators; I hope you find it useful, and if you think I've missed something please add a comment! WebLogic and Cloud Application Foundation (CAF) The big WebLogic release of the year already happened a few months ago with 12.1.2, so I won't duplicate that here. Will Lyons discussed the WebLogic and Coherence roadmap, which essentially is that 12.1.3 will probably be released to coincide with SOA 12c next year, and that 12.1.4, the next feature-rich WebLogic release, is more likely to be in 2015. This latter release will probably include full Java EE 7 support, have enhancements for multi-tenancy, and offer further auto-scaling features to support increased density (i.e. more WebLogic usage for the same amount of hardware). There's a new Oracle Virtual Assembly Builder (OVAB) out already, and an Oracle Traffic Director (OTD) 12c release around the corner too. Also of relevance to administrators is that Oracle has increased the support lifetime for Fusion Middleware 11g (e.g. WebLogic 10.3.6), so that Premier Support will now run to the end of 2018 and Extended Support until 2021 - this should remove any Oracle-driven pressure to upgrade, at least. Java Mission Control Java Mission Control (JMC) is the HotSpot Java 7 version of JRockit 6 Mission Control, a very nice performance monitoring tool from Oracle's BEA acquisition. Flight Recorder is a feature built into the JVM which records diagnostic events into, typically, a circular buffer which can then be used for historical analysis, particularly in the case of a JVM crash or hang. It's been available separately for WebLogic only for perhaps a year now but, more significantly, it now includes JVM events and was bundled in with JDK7 Update 40 a few weeks ago. I attended a couple of interesting JavaOne sessions on JMC/Flight Recorder and have to say it's looking really good - it has all the previous JRMC features except the memory leak detector, plus some enhancements around operative sets and ECID filtering, I think. Marcus also showed how you could add your own events into Flight Recorder by building your own event class - they are then available for graphing alongside all the other events in JMC. This uses a currently unsupported/undocumented API, but it's also the same one that WebLogic uses for WLDF events, so I imagine it is stable. I'm not sure quite whether this would be useful to custom applications, as opposed to infrastructure services or ISV packaged applications, but it was a very nice demonstration. I've been testing JMC/FR enabling on several environments recently and my confidence is growing - it feels robust and I think it could very soon be part of my standard builds. Read the full article here. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • The SmartAssembly Rearchitecture

    - by Simon Cooper
    You may have noticed that not a lot has happened to SmartAssembly in the past few months. However, the team has been very busy behind the scenes working on an entirely new version of SmartAssembly. SmartAssembly 6.5 Over the past few releases of SmartAssembly, the team had come to the realisation that the current 'architecture' - grown organically, way before RedGate bought it, from a simple name obfuscator over the years into a full-featured obfuscator and assembly instrumentation tool - was simply not up to the task. Not for what we wanted to do with it at the time, and not for what we have planned for the future. Not only was it not up to what we wanted it to do, but it was severely limiting our development capabilities: long-standing bugs in the root architecture that couldn't be fixed, some rather...interesting...design decisions, and convoluted logic that increased the complexity of any bugfix or new feature tenfold. So, we set out to fix this. Earlier this year, a new engine was written on which SmartAssembly would be based. Over the following few months, each feature was ported over to the new engine and extensively tested by our existing unit and integration tests. The engine was linked into the existing UI (no easy task, due to the tight coupling between the UI and the old engine), and existing RedGate products were tested on the new SmartAssembly to ensure the new engine acted in the same way. The result is SmartAssembly 6.5. The risks of a rearchitecture Are there risks to rearchitecting a product like SmartAssembly? Of course. There was a lot of undocumented behaviour in the old engine, and as part of the rearchitecture we had to find this behaviour, define it, and document it. In the process we found some behaviour of the old engine that simply did not make sense; hence the changes in pruning & obfuscation behaviour in the release notes. All the special edge cases we had to find, document, and re-implement. There was a chance that these special cases would not be found until near the end of the project, when everything was functionally complete and interacting together. By that stage, it would be hard to go back and change anything without a whole lot of extra work, delaying the release by months. We always knew this was a possibility; our initial estimate of the time required was '4 months, ± 4 months'. And that was including various mitigation strategies to reduce the likelihood of these issues being found right at the end. Fortunately, this worst case did not happen. However, the rearchitecture did produce some benefits. As well as numerous bug fixes that we could not make any other way, we've also added logging that lets you find out exactly why a particular field or property wasn't pruned or obfuscated. There's a new command line interface, we've tested it with WP7.1 and Silverlight 5, and we've added a new option to error reporting that improves the performance of instrumented apps by ~10%, at the cost of inaccurate line numbers in reports. So? What differences will I see? Largely none. SmartAssembly 6.5 produces the same output as SmartAssembly 6.2. The performance of 6.5 will be much faster for some users, and generally the same as 6.2 for the rest. If you've encountered a bug with previous versions of SmartAssembly, I encourage you to try 6.5, as it has most likely been fixed in the rearchitecture. If you encounter a bug with 6.5, please do tell us; we'll be doing another release quite soon, so we'll aim to fix any issues caused by 6.5 in that release.
Most importantly, the new architecture finally allows us to implement some Big Things with SmartAssembly we've been planning for many months; these will fundamentally change how you build, release and monitor your application. Stay tuned for further updates!

    Read the article

  • Handy SQL Server Functions Series (HSSFS) Part 2.0 - Prelude to Parsing Patterns Properly

    - by Most Valuable Yak (Rob Volk)
    In Part 1 of the series I wrote about 2 lesser-known and somewhat undocumented functions. In this part, I'm going to cover some familiar string functions like Substring(), Parsename(), Patindex(), and Charindex() and delve into their strengths and weaknesses. I'm also splitting this part up into sub-parts to help focus on a particular technique and/or problem with the technique, hence the Part 2.0. Consider this a composite post, or com-post, if you will. (It may just turn out to be a pile of sh_t after all.) I'll be using a contrived example, perhaps the most frustratingly useful, or usefully frustrating, function in SQL Server: @@VERSION. Contrived, because there are better ways to get the information (which I'll cover later); frustrating, because of the way Microsoft formatted the value; and useful because it does have 1 or 2 bits of information not found elsewhere. First let's take a look at the output of @@VERSION:

      Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
          Apr 2 2010 15:53:02
          Copyright (c) Microsoft Corporation
          Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)

    There are 4 lines, with lines 2-4 indented with a tab character. In case your browser (or this blog software) doesn't show it correctly, I gave each line a different color. While this PRINTs nicely, if you SELECT @@VERSION in grid mode it all runs together, because the grid ignores carriage return/line feed (CR/LF) characters. Not fatal, but annoying. Note that @@VERSION's output will vary depending on the edition and version of SQL Server, and also the OS it's installed on. Despite the differences, the output is laid out the same way and the relevant pieces are in the same order. I'll be using the following view for Parts 2.1 onward, so we have a nice collection of @@VERSION information:

      create view version(SQLVersion, VersionString) AS (
          select 2000, 'Microsoft SQL Server 2000 - 8.00.2055 (Intel X86) Dec 16 2008 19:46:53 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
          union all
          select 2005, 'Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86) May 26 2009 14:24:20 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
          union all
          select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)'
          union all
          select 2005, 'Microsoft SQL Server 2005 - 9.00.3080.00 (Intel X86) Sep 6 2009 01:43:32 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)'
          union all
          select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
          union all
          select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
      )

    Feel free to add your own @@VERSION info if it's not already there. In Part 2.1 I'll focus on extracting the SQL Server version number (10.50.1600.1 in the first example) and the Edition (Developer), but will have a solution that works with all versions. Stay tuned!

    Read the article

  • Forcing UIInterfaceOrientation changes on iPhone

    - by Andiih
    I'm struggling with an iPhone application that requires just about every push or pop in the nav controller stack to change orientation. Basically the first view is portrait, the second landscape, the third portrait again. (Yes, I know this is less than ideal, but that's the design and I've got to implement it.) I've been through various advice on here:

      http://stackoverflow.com/questions/995723/how-do-i-detect-a-rotation-on-the-iphone-without-the-device-autorotating
      http://stackoverflow.com/questions/1824682/force-portrait-orientation-on-pushing-new-view-to-uinavigationviewcontroller
      http://stackoverflow.com/questions/181780/is-there-a-documented-way-to-set-the-iphone-orientation

    But without total success. Linking against 3.1.2, my reading of the linked articles above is that if my portrait view pushes a view with

      - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
          // Return YES for supported orientations
          return (interfaceOrientation == UIInterfaceOrientationLandscapeRight);
      }

    then that view should appear rotated to landscape. What happens is that it appears in its "broken" portrait form, then rotates correctly as the device is turned. If I pop the controller back to my portrait view (which has an appropriate shouldAutorotate...), that view remains in broken landscape form until the device is returned to portrait orientation. I've also tried removing all the shouldAutorotate messages and instead forcing rotation by transforming the view. This kind of works, and I've figured out that by moving the status bar (which is actually hidden in my application)

      [UIApplication sharedApplication].statusBarOrientation = UIInterfaceOrientationLandscapeRight;

    the keyboard will appear with the correct orientation when desired. The problem with this approach is that the status bar transform is weird and ugly when you don't have a status bar - a shadow looms over the page with each change. So, what am I missing?
      1) Am I wrong in thinking that in 3.1.2 (or possibly earlier) shouldAutorotateToInterfaceOrientation should provide the desired orientation simply by pushing controllers?
      2) Is there another way of getting keyboards to appear in the correct orientation?
      3) Are the undocumented API calls the way to go? (Please no!)

    Read the article
