Search Results

  • SQL SERVER – Size of Index Table for Each Index – Solution 3 – Powershell

    - by pinaldave
    If you are a PowerShell user, Laerte Junior is not a new name. He is a man with exceptional knowledge of PowerShell. He is not only very knowledgeable, but also very kind and eager to help those in need. I had been attempting to set up PowerShell for many days, but kept running into issues and was not able to get going with the tool. Finally, yesterday I sent an email to Laerte in response to his comment posted here. Within 5 minutes, Laerte came online and helped me with the solution. He spent nearly 15 minutes working along with me to solve my installation problem. And yes, he resolved it remotely without even looking at my screen – what a skilled and exceptional person! I will soon post a detailed note about the issue I faced and resolved with Laerte’s help. Here is his solution to my earlier puzzle, in his own words. Read the original puzzle here and Laerte’s solution here.

    Hi Pinal, I do not say better, but maybe another approach for enthusiasts of PowerShell and the SQLPSX library would be:

    1 – All indexes in all tables and all databases:

      Get-SqlDatabase -sqlserver "Yourserver" | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused

    2 – All indexes in all tables of a specific database:

      Get-SqlDatabase -sqlserver "Yourserver" "Yourdb" | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused

    3 – All indexes in a specific table and database:

      Get-SqlDatabase -sqlserver "Yourserver" "Yourdb" | Get-SqlTable "YourTable" | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused

    And to output to a text file, pipe to Out-File:

      Get-SqlDatabase -sqlserver "Yourserver" | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused | Out-File c:\IndexesSize.txt

    If you keep all your servers in one text file, this can run against all of them as well. Let’s say you have all your servers in servers.txt, something like:

      NameServer1
      NameServer2
      NameServer3
      NameServer4

    We could use:

      foreach ($Server in Get-Content c:\temp\servers.txt) {
          Get-SqlDatabase -sqlserver $Server | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused
      }

    :)

    After fixing my issue with PowerShell, I ran Laerte’s second suggestion – “All indexes in all tables of a specific database” – and got exactly the accurate output I was looking for.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology. Tagged: Powershell
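    The SQLPSX cmdlets used above are a thin layer over SQL Server Management Objects (SMO), so for readers who prefer compiled code, here is a rough C# sketch of the same walk. This is not part of Laerte’s solution: the server name is a placeholder, the SMO assemblies (Microsoft.SqlServer.Smo, Microsoft.SqlServer.ConnectionInfo) must be referenced, and the assumption that Index.SpaceUsed reports kilobytes should be verified against the SMO documentation.

      using System;
      using Microsoft.SqlServer.Management.Smo;

      class IndexSizes
      {
          static void Main()
          {
              var server = new Server("Yourserver"); // placeholder instance name

              // Walk every database, table, and index on the server.
              foreach (Database db in server.Databases)
              {
                  foreach (Table table in db.Tables)
                  {
                      foreach (Index index in table.Indexes)
                      {
                          // Index.SpaceUsed is assumed to report KB used by the index.
                          Console.WriteLine("{0}.{1}.{2}.{3}: {4} KB",
                              db.Name, table.Schema, table.Name, index.Name, index.SpaceUsed);
                      }
                  }
              }
          }
      }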



  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on Twitter? I’ve been hearing for a while about people claiming to get the majority of their traffic from Twitter these days. Now, I’ve been playing with the twitter Ruby gem recently, doing various experiments which I’ll not go into detail about here because they could be regarded as spamming… if I conducted them on a large scale, that is. It’s scary to see people actually engaging with @replies crafted with some regular expressions and Eliza-like trickery on status updates found using the Twitter API. I’m wondering how Twitter is going to contain the coming spam-flood.

    When posting links I used bit.ly as URL shortener, since it seems to be the de facto standard on Twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Seeing that I was posting the links together with some information about what each link was about, I concluded that the people actually clicking the links should be very targeted visitors. This felt a bit like free AdWords, and I suddenly started to understand why everyone was raving about getting traffic from Twitter. How wrong I was! (And I think several thousand online marketers with me.)

    On the destination site I used a traffic logging solution that works by including a little JavaScript snippet in your pages. Somehow all visitors seemed to disappear after the bit.ly redirect and before reaching the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the logfiles of the destination site, and by making my own ‘shortened’ URLs by doing redirects through a very short domain name I own. This way, I could check the Apache access_log before the redirects. Most user agents turned out to be bots without a doubt. Here’s an excerpt of user agents awk’ed from Apache’s access_log for a period of about one hour, right after posting some links:

      AideRSS 2.0 (postrank.com)
      Java/1.6.0_13
      Java/1.6.0_14
      libwww-perl/5.816
      MLBot (www.metadatalabs.com/mlbot)
      Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org)
      Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com)
      Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/)
      Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920
      Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5
      OpenCalaisSemanticProxy
      PycURL/7.18.2
      PycURL/7.19.3
      Python-urllib/1.17
      Twingly Recon
      twitmatic
      Twitturly / v0.6
      Wget/1.10.2 (Red Hat modified)
      Wget/1.11.1 (Red Hat modified)

    Of the few user agents that seem ‘real’ at first, half originate from an IP address used by Amazon EC2. And I doubt people are setting up proxies on there. Oh yeah, Googlebot (the real deal, from a legit Google-owned address) is sucking up posted links like fresh oysters. I guess Google is trying to make sure in advance never to be beaten by Twitter in the ‘realtime search’ department. Actually, I think it’d be almost stupid NOT to post any new pages/posts/websites on Twitter; it must be one of the fastest ways to get a Googlebot visit.

    Same experiment with a real, established twitter account

    Now, because I was posting the URLs either as ‘status’ messages or directed @people on a test account with hardly any (human) followers, I checked again using the Twitter accounts of a commercial site I’m involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added ‘my’ redirect after the bit.ly redirect: same results, although with a seemingly somewhat higher real-visitor/bot ratio.

    By the way: one of these accounts was recently ‘punished’ with a one-week lock because it sent the same (one!) status update that had just been sent from another account. They got an email explaining the lock, saying the account didn’t act according to their TOS. I can’t find anything in their TOS about it, can you? I don’t think Twitter is on the right track punishing a legit account, knowing the trickery I had been doing with its API went totally unpunished. I might be wrong though, I often am. On the other hand: this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The ones that are really real visitors are also very targeted. I’m just not sure if the amount of work involved could hold up against an AdWords campaign.

    Reposting the same link over and over again helps

    One thing I noticed: it helps to keep reposting the same links at regular intervals. I guess most people only look at the first page when checking out recent posts of the people they’re following, or don’t look too far back when performing a search. Now, this probably isn’t according to the Twitter TOS. Actually, it might be spamming, but no-one is obligated to follow anyone else of course. This way, I was getting more real visitors and fewer bots. To my surprise (with my programmer’s hat on) there were still repeated visits from the same bots coming from the same IP addresses. Did they expect to find something else when visiting for a second or third time? (Actually, this gave me an idea: you can’t change a link once it’s posted, but you can change where it redirects to.) Most bots were smart enough not to follow the same link again though.

    Are you successful in getting real visitors from Twitter? Are you only relying on bit.ly to provide traffic stats?
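    If you want to run the same kind of check on your own logs, here is a minimal C# sketch that tallies user agents from an access log in the Apache “combined” format, where the user agent is the last double-quoted field on each line. The log path is a placeholder, and this is an illustration rather than the exact tooling used for the excerpt above (that was awk):

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Linq;
      using System.Text.RegularExpressions;

      class UserAgentTally
      {
          static void Main()
          {
              var counts = new Dictionary<string, int>();
              var quoted = new Regex("\"([^\"]*)\"");

              foreach (var line in File.ReadLines(@"C:\logs\access_log")) // placeholder path
              {
                  var matches = quoted.Matches(line);
                  if (matches.Count == 0) continue;

                  // In combined log format the user agent is the last quoted field.
                  var agent = matches[matches.Count - 1].Groups[1].Value;
                  counts[agent] = counts.TryGetValue(agent, out var n) ? n + 1 : 1;
              }

              // Most frequent agents first - bots tend to float to the top.
              foreach (var pair in counts.OrderByDescending(p => p.Value))
                  Console.WriteLine("{0,6}  {1}", pair.Value, pair.Key);
          }
      }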


  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation where data has been accidentally deleted. You can download ApexSQL Recover here, install it, and play along.

    With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios where you can still successfully recover data.

    If you have only a full database backup

    This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover the SQL database records that existed in the table on Sunday. Any records inserted into the table on Monday or Tuesday will be lost forever. The same goes for records modified in this period: this method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.

    If you have a full database backup followed by differential database backups

    Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.

    If you have a full database backup and a chain of transaction log backups

    This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To get optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring them back. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then restore the transaction log backups one by one, in the order they were created, starting with the first one created after the restored differential database backup. Now the table will be in the state it was in before the records were deleted. You have to identify the deleted records, script them, and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of disk space.

    How to easily recover deleted records

    The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups. To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately removed from the data pages, but marked to be overwritten by new records. Such records are no longer shown as existing, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more likely that the records will be overwritten. The more transactions occur after the deletion, the greater the chance that the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on the copy. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

    First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as:

      DELETE FROM Person.EmailAddress
      WHERE BusinessEntityID BETWEEN 70 AND 80

    Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab.

    In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.

    In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option, which guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically. Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above.

    The next step for this option is to specify the time range. Selecting a small time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script records that were deleted on purpose, and you don’t want that. If needed, you can check the generated script and manually remove such records.

    After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don’t want to recover it, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too.

    The next step offers two options: create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or create a new database, create the Person.EmailAddress table in it, and insert the deleted records there. I’ll select the first one. The recovery process then completes, and 11 records are found and scripted, as expected.

    To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like:

      INSERT INTO Person.EmailAddress(BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
      VALUES(70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS,
             'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000');

    To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute:

      SELECT * FROM Person.EmailAddress
      WHERE BusinessEntityID BETWEEN 70 AND 80

    As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records lost to a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, and can also repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it’s practically ancient: it was introduced in 1997, when Internet Explorer 4 came out. HTML Help 1.0 is basically a completely HTML-based help system that uses a help viewer which internally uses Internet Explorer to render the HTML help content. Because of its use of the Internet Explorer shell for rendering, there were many security issues in the past, which resulted in the locking down of the Web Browser control in Windows and also of the help engine, with some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML, and because it works with many Windows application platforms out of the box. While there have been various attempts to replace it, CHM still seems to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system-based help at all, but links to online documentation. For Windows apps, though, it’s still very common to see CHM help files; there is still a ton of CHM help out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output.

    Image is Everything and you ain't got it!

    One problem with the CHM engine is that it’s stuck with an ancient Internet Explorer version for rendering. For example, you might have an HTML Help topic that uses HTML5 or CSS3 content, shown in a full Web Browser instance of Internet Explorer. The page clearly uses CSS3 features like rounded corners and box shadows, rendered with plain CSS. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new CSS features, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+, etc.). Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy: all the CSS3 styling is gone, and although the page display and functionality still work, all the extra styling features are lost. This even though I am running on a Windows 7 machine with IE9 that should be able to render these CSS features. Bummer.

    Web Browser Control - perpetually stuck in IE 7 Mode

    The problem is the Web Browser/Shell Components in Windows. This component is, and has been, part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn’t kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode by default for engine compatibility reasons. However, there is at least one way to fix this explicitly, using Registry keys on a per-application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component. If the application is older than the specified version, it falls back to the default version (IE7 rendering).

    Forcing CHM files to display with IE9 (or later) Rendering

    Knowing that we can force the IE version used for a given process, it’s also possible to affect the CHM rendering by setting the same keys on the executable that hosts the CHM file. What that executable is depends on the type of application, as there are a number of ways to launch the help engine:

    hh.exe - The standalone Windows CHM Help Viewer that launches when you open a CHM from Windows Explorer. You can manually add hh.exe to the registry keys.

    YourApplication.exe - If you’re using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is the host. You should add your application’s exe to the registry during application startup.

    foxhhelp9.exe - If you’re building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry.

    What to set

    You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications.

    32 bit only or 64 bit:
      HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
      Value Key: hh.exe

    32 bit on a 64 bit machine:
      HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
      Value Key: hh.exe

    Note that it’s best to always set both values, ideally when you install your application, so it works regardless of which platform you run on. The value specified is a DWORD, and the interesting values are decimal 9000 for IE9 rendering mode (depending on !DOCTYPE settings) or 9999 for IE9 standards mode always. You can use the same logic with 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they’re going to do for version 10 to perpetuate that pattern). I think 9000 is the value you’d most likely want to use: 9000 means that IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features. Here’s an example of how I set the registry keys in my Tarma Installmate registry configuration: I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all the apps are 32 bit apps, the 64 bit key (the default one) is often used. So, now that I’ve set the registry key for hh.exe, I can launch my CHM help file from Explorer and see the CSS3, IE9-rendered display.
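    For installers or startup code written in .NET, the same keys can be written programmatically. Here is a minimal C# sketch, assuming administrative rights (writing under HKLM requires elevation); the hh.exe value name and the 9000 value are taken straight from the discussion above:

      using Microsoft.Win32;

      class RegisterBrowserEmulation
      {
          static void Main()
          {
              const string key32 =
                  @"SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";
              const string key64 =
                  @"SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

              foreach (var path in new[] { key32, key64 })
              {
                  // CreateSubKey opens the key if it already exists.
                  using (var key = Registry.LocalMachine.CreateSubKey(path))
                  {
                      // 9000 = IE9 rendering with DOCTYPE-based fallback.
                      key.SetValue("hh.exe", 9000, RegistryValueKind.DWord);
                  }
              }
          }
      }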
    Summary

    It sucks that we have to go through all these hoops to get what should be natural behavior for an application: supporting the latest features available on the system. But it shouldn’t be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward-looking technologies. It’s a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render properly, but at least it’s possible to make it work after all. Using this technique, it’s possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you’re building from a single source), you can now sidestep the issue of your customers asking: why does my help file look so much shittier than the online help… No more!

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows


  • Emperors don’t come cheap

    - by RoyOsherove
    “Sorry,” I replied in a polite email. “Maybe next year, when budgets allow for it.” It was addressed to the organizer of TechEd US, which was to be in New Orleans this year. Man, I would have loved to be in New Orleans this year, but I guess these guys only understand one language – and I won’t be their puppy any more. You see, they wouldn’t pay for my business class flight to TechEd from Israel. Me – the great emperor of unit testing – travelling coach for 12 hours? No thanks. I have better things to do! And this is after last year, when they only invited me to give one talk throughout the conference. One talk. After I was on the top ten speakers list of that conference the year before?! No sir! They did give it a good try, though. They said they could pay up to $4,000 per flight for me, and that they had only found a flight at about $5,460. “Unacceptable,” I told them when they asked if I would pay the difference. And that was that. Goodbye, TechEd. As I closed up Gmail, wondering if I should have told them that I had found a similar flight at $4,300, and came back to the living room, I told my wife, all full of myself, “I just canceled TechEd.” “Oh good,” she said, not even looking at me as she tried to feed our one-year-old. “Did you tell them you need to cancel because you already have another flight that month and your wife won’t let you travel more than once a month anymore?” “Yeah, right,” I said. Just what I need – for people to realize I’m totally whipped. I still need an ounce of dignity. “I told those bastards that if they want me they have to make an effort. People like me don’t come cheap, you know?” “You’re an idiot for not telling them the real reason.” She handed me the baby. “What if they had found a flight that matched their budget? How would you have gotten away from that engagement?” She put on “Lost” on the media center and sat next to me. I did not reply.


  • ASP.NET and WIF: Showing custom profile username as User.Identity.Name

    - by DigiMortal
    I am building an ASP.NET MVC application that uses external services to authenticate users. For ASP.NET, users are fully authenticated when they are redirected back from the external service. In the system, they are logically authenticated once they have created user profiles. In this posting I will show you how to force ASP.NET MVC controller actions to demand the existence of custom user profiles.

    Using external authentication sources with AppFabric

    Suppose you want to be user-friendly and you don’t force users to keep in mind yet another username/password when they visit your site. You can accept logins from popular sites like Windows Live, Facebook, Yahoo, Google and many more. If a user has an account with one of these services, then he or she can use that account to log in to your site. If you have a community site, then you usually have support for user profiles too. Some of these providers give you some information about users and others don’t. So the only thing in common you get from all those providers is some ID that identifies the user uniquely within that service. The image above shows how a new user joins your site. Existing users who already have a profile are directed to the user homepage after they are authenticated. You can read more about how to solve the semi-authorized users problem in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages. The other problem is related to usernames, which we don’t get from all identity providers.

    Why is IIdentity.Name sometimes empty?

    The problem is described more specifically in my blog posting Identifying AppFabric Access Control Service users uniquely. In short, the problem is that not all providers have a claim called http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name. The following diagram illustrates what happens when the user gets a token from AppFabric ACS and is redirected to your site. When the user was authenticated using Windows Live ID, there is no name claim in the token, and that’s why User.Identity.Name is empty. Okay, we can force the nameidentifier to be used as the name (we can do it in the web.config file), but we have user profiles and we want the username from the profile to be shown whenever a username is needed.

    Modifying the name claim

    Now let’s force IClaimsIdentity to use the username from our user profiles. You can read more about my profiles topic in the blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages, and you can find some useful extension methods for the claims identity in my blog posting Identifying AppFabric Access Control Service users uniquely. Here is what we do to set User.Identity.Name:

    1. Check whether the user has a profile.
    2. If the user has a profile, check whether User.Identity.Name matches the name given by the profile.
    3. If the names do not match, then the identity provider probably returned some other name for the user, so remove the name claim and recreate it with the correct username.
    4. Add the new name claim to the claims collection.

    All this happens in the Application_AuthorizeRequest event of our web application. The code is here.

      protected void Application_AuthorizeRequest()
      {
          if (string.IsNullOrEmpty(User.Identity.Name))
          {
              var identity = User.Identity;
              var profile = identity.GetProfile();
              if (profile != null)
              {
                  if (profile.UserName != identity.Name)
                  {
                      identity.RemoveName();

                      var claim = new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", profile.UserName);
                      var claimsIdentity = (IClaimsIdentity)identity;
                      claimsIdentity.Claims.Add(claim);
                  }
              }
          }
      }

    The RemoveName extension method is simple – it looks for name claims in the IClaimsIdentity claims collection and removes them.

      public static void RemoveName(this IIdentity identity)
      {
          if (identity == null)
              return;

          var claimsIdentity = identity as ClaimsIdentity;
          if (claimsIdentity == null)
              return;

          for (var i = claimsIdentity.Claims.Count - 1; i >= 0; i--)
          {
              var claim = claimsIdentity.Claims[i];
              if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name")
                  claimsIdentity.Claims.RemoveAt(i);
          }
      }

    And we are done. Now User.Identity.Name returns the username from the user profile, and you can use it to show the username of the current user everywhere in your site.

    Conclusion

    Mixing the AppFabric Access Control Service and Windows Identity Foundation with custom authorization logic is not impossible, but it is a little bit tricky. This posting finishes my little series about AppFabric ACS and WIF for now, and hopefully you found some useful tricks, tips, hacks and code pieces you can use in your own applications.
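    One small aside, not from the original post: WIF also exposes this claim-type URI as a constant, so if your file already imports the Microsoft.IdentityModel.Claims namespace you can avoid repeating the string literal. Treat the exact constant name as an assumption to verify against your WIF version:

      // Assumed equivalent of the literal URI used above.
      var claim = new Claim(ClaimTypes.Name, profile.UserName);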


  • Globally Handling Request Validation In ASP.NET MVC

    - by imran_ku07
    Introduction:

    Cross Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) attacks are among the most dangerous attacks on the web, and among the most famous security issues affecting web applications. OWASP regards XSS as the number one security issue on the Web. Both ASP.NET Web Forms and ASP.NET MVC pay a great deal of attention to making applications built with ASP.NET as secure as possible. So by default they will throw the exception 'A potentially dangerous XXX value was detected from the client' when they see < followed by an exclamation mark (like <!), or < followed by the letters a through z (like <s), or & followed by a pound sign (like &#123) as part of the querystring, posted form, or cookie collection. This is good for a lot of applications, but it is not always what you want. Many applications need to allow users to enter HTML tags, for example applications which use a rich text editor. If you are using Web Forms, you can allow users to enter these tags by setting validateRequest="false" inside the <pages> element in your Web.config application configuration file; this globally disables request validation. But request handling in ASP.NET MVC is different from ASP.NET Web Forms, so to disable request validation globally in ASP.NET MVC you would have to put ValidateInputAttribute on every controller. This becomes painful if you have hundreds of controllers. Therefore, in this article I will present a very simple way to handle request validation globally through web.config.

    Description:

    Before showing how to do this, it is worth seeing why validateRequest in the Page directive and web.config does not work in ASP.NET MVC. Request handling in ASP.NET Web Forms and ASP.NET MVC is different. In Web Forms, the page handler checks the posted form, query string and cookie collection during the Page's ProcessRequest method, while in MVC request validation occurs when the ActionInvoker calls the action. Just compare the stack traces of the two frameworks.

    ASP.NET MVC stack trace:

      System.Web.HttpRequest.ValidateString(String s, String valueName, String collectionName) +8723114
      System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, String collectionName) +111
      System.Web.HttpRequest.get_Form() +129
      System.Web.HttpRequestWrapper.get_Form() +11
      System.Web.Mvc.ValueProviderDictionary.PopulateDictionary() +145
      System.Web.Mvc.ValueProviderDictionary..ctor(ControllerContext controllerContext) +74
      System.Web.Mvc.ControllerBase.get_ValueProvider() +31
      System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +53
      System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +109
      System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +399
      System.Web.Mvc.Controller.ExecuteCore() +126
      System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +27

    ASP.NET Web Forms stack trace:

      System.Web.HttpRequest.ValidateString(String s, String valueName, String collectionName) +3213202
      System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, String collectionName) +108
      System.Web.HttpRequest.get_QueryString() +119
      System.Web.UI.Page.GetCollectionBasedOnMethod(Boolean dontReturnNull) +2022776
      System.Web.UI.Page.DeterminePostBackMode() +60
      System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +6953
      System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +154
      System.Web.UI.Page.ProcessRequest() +86

    Since the first responder to the request in ASP.NET MVC is the controller action, the posted values are checked when the action is invoked. That's why web.config's validateRequest setting does not work in ASP.NET MVC. So let's see how to handle this globally in ASP.NET MVC. First of all, add an appSetting in web.config:

      <appSettings>
          <add key="validateRequest" value="true"/>
      </appSettings>

    I am using the same key used to disable request validation in Web Forms. Next, create a new controller factory by deriving from DefaultControllerFactory:

      public class MyAppControllerFactory : DefaultControllerFactory
      {
          protected override IController GetControllerInstance(Type controllerType)
          {
              var controller = base.GetControllerInstance(controllerType);
              string validateRequest = System.Configuration.ConfigurationManager.AppSettings["validateRequest"];
              bool b;
              if (validateRequest != null && bool.TryParse(validateRequest, out b))
                  ((ControllerBase)controller).ValidateRequest = b;
              return controller;
          }
      }

    Then register your controller factory in global.asax:

      protected void Application_Start()
      {
          //............................................................................................
          ControllerBuilder.Current.SetControllerFactory(new MyAppControllerFactory());
      }

    This will prevent the above exception from occurring in the context of ASP.NET MVC. But if you are using the default WebFormViewEngine, you also need to set validateRequest="false" inside the <pages> element in your web.config file. Now when you run your application you will see the effect of the validateRequest appSetting. Note also that a ValidateInputAttribute placed on an action or controller will always override this setting; a sketch of that follows below.

    Summary:

    Request validation is a great security feature in ASP.NET, but sometimes there is a need to disable it entirely. In this article I showed you how to disable it globally in ASP.NET MVC. I also explained the difference between request validation in Web Forms and ASP.NET MVC. Hopefully you will enjoy this.
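    To illustrate that override, here is a minimal sketch of a hypothetical controller (the controller and action names are mine, not the article's) that opts out of request validation regardless of the appSetting handled by the factory above:

      // ValidateInputAttribute on the controller (or a single action) always
      // wins over the web.config-driven default set by the controller factory.
      [ValidateInput(false)]
      public class CommentsController : Controller
      {
          [HttpPost]
          public ActionResult Create(string body)
          {
              // "body" may now contain markup such as <b>; encode it on output.
              return Content(Server.HtmlEncode(body));
          }
      }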


  • Pro ASP.NET MVC Framework Review

    - by Ben Griswold
    Early in my career, when I wanted to learn a new technology, I’d sit in the bookstore aisle and work my way through each of the available books on the given subject. Put in enough time in a bookstore and you can learn just about anything. I used to really enjoy my time in the bookstore – but times have certainly changed. Whereas books used to be the only place I could find solutions to my problems, now they may be the very last place I look. I have been working with the ASP.NET MVC Framework for more than a year. I have a few projects and a couple of major deployments under my belt, and I was able to get up to speed with the framework without reading a single book*. With so many resources at our fingertips (podcasts, screencasts, blogs, stackoverflow, open source projects, www.asp.net, you name it), why bother with a book? Well, I flipped through Steven Sanderson’s Pro ASP.NET MVC Framework a few months ago. And since it is prominently displayed in my co-worker’s office, I tend to pick it up as a reference from time to time. Last week, I’m not sure why, I decided to read it cover to cover. Man, did I eat this book up. Granted, a lot of what I read was review, but it was only review because I had already learned those lessons by piecing the puzzle together for myself via various sources. If I were starting with ASP.NET MVC (or ASP.NET web development in general) today, the first thing I would do is buy Steven Sanderson’s Pro ASP.NET MVC Framework and read it cover to cover. Steven Sanderson did such a great job with this book! As much as I appreciated the in-depth model, view, and controller talk, I was completely impressed with all the extra bits that were included. There was a nice overview of BDD, view engine comparisons, a chapter dedicated to security and vulnerabilities, IoC, TDD and mocking (of course), IIS deployment options, and a nice overview of what the .NET platform and C# offer. Heck, Sanderson even included bits about WebForms! The book is fantastic and I highly recommend it – even if you think you’ve already got your head around ASP.NET MVC. By the way, procrastinators may be in luck: the ASP.NET MVC V2 Framework book can be pre-ordered. You might want to jump right into the second edition and find out what Sanderson has to say about MVC 2.

    * Actually, I did read through the free bits of Professional ASP.NET MVC 1.0. But it was just a chapter – albeit a really long chapter.


  • RequestValidation Changes in ASP.NET 4.0

    - by Rick Strahl
    There’s been a change in the way the ValidateRequest attribute on WebForms works in ASP.NET 4.0. I noticed this today while updating a post on my weblog, all of which contain raw HTML and so pretty much all trigger request validation. I recently upgraded this app from ASP.NET 2.0 to 4.0, and it’s now failing to update posts. At first this was difficult to track down because of custom error handling in my app – the custom error handler traps the exception and logs it with only basic error information, so the full detail of the error was initially hidden. After some more experimentation in development mode, the error that occurs is the typical ASP.NET request validation error (‘A potentially dangerous Request.Form value was detected…’). At first I was really perplexed, as I didn’t read the entire error message and my page does have:

      <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="NewEntry.aspx.cs"
          Inherits="Westwind.WebLog.NewEntry"
          MasterPageFile="~/App_Templates/Standard/AdminMaster.master"
          ValidateRequest="false"
          EnableEventValidation="false"
          EnableViewState="false" %>

    WTF? ValidateRequest would seem like it should be enough, but alas, in ASP.NET 4.0 that setting alone is no longer enough. Reading the fine print in the error explains that you need to explicitly set the requestValidationMode for the application back to V2.0 in web.config:

      <httpRuntime executionTimeout="300" requestValidationMode="2.0" />

    Kudos to the ASP.NET team for putting up a nice error message that tells me how to fix this problem, but excuse me, why the heck would you change this behavior to require an explicit override of an optional, by-default-disabled page level switch? You’ve just turned a relatively simple fix into a nasty morass of hard-to-discover configuration settings??? The original way this worked was perfectly discoverable via attributes in the page. Now you can set this setting in the page, get completely unexpected behavior, and be required to set what effectively amounts to a backwards compatibility flag in the configuration file.

    It turns out the real reason for the .config flag is that the request validation behavior has moved from the WebForms pipeline down into the entire ASP.NET/IIS request pipeline and is now applied to all requests. Here’s what the breaking changes page from Microsoft says about it:

    “The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing. In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request. As a result, request validation errors might now occur for requests that previously did not trigger errors. To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file: <httpRuntime requestValidationMode="2.0" /> However, we recommend that you analyze any request validation errors to determine whether existing handlers, modules, or other custom code accesses potentially unsafe HTTP inputs that could be XSS attack vectors.”

    Ok, so ValidateRequest on the form still works as it always has, but it’s actually the ASP.NET event pipeline, not WebForms, that’s throwing the above exception, as request validation is applied to every request that hits the pipeline. Creating the runtime override removes the HttpRuntime checking and restores the WebForms-only behavior. That fixes my immediate problem, but it still leaves me wondering, especially given the vague wording of the explanation above. One important detail is missing from that description: the request validation is applied only to application/x-www-form-urlencoded POST content, not to all inbound POST data. When I first read this, it freaked me out, because it sounds like literally ANY request hitting the pipeline is affected. To make sure this is not really so, I created a quick handler:

      public class Handler1 : IHttpHandler
      {
          public void ProcessRequest(HttpContext context)
          {
              context.Response.ContentType = "text/plain";
              context.Response.Write("Hello World <hr>" + context.Request.Form.ToString());
          }

          public bool IsReusable
          {
              get { return false; }
          }
      }

    and called it with Fiddler by posting some XML to the handler using the default form-urlencoded POST content type – and sure enough, hitting the handler also causes the request validation error and a 500 server response. Changing the content type to text/xml fixes the problem, bypassing the request validation filter, so Web Services/AJAX handlers and custom modules/handlers that implement custom protocols aren’t affected as long as they work with special input content types. It also looks like multipart encoding does not trigger request validation in the runtime either, so this request also works fine:

      POST http://rasnote/weblog/handler1.ashx HTTP/1.1
      Content-Type: multipart/form-data; boundary=------7cf2a327f01ae
      User-Agent: West Wind Internet Protocols 5.53
      Host: rasnote
      Content-Length: 40
      Pragma: no-cache

      <xml>asdasd</xml>--------7cf2a327f01ae

    *That* probably should trigger request validation – since it is a potential HTML form submission – but it doesn’t.

    New Runtime Feature, Global Scope Only?

    Ok, so request validation is now a runtime feature, but sadly it’s a feature that’s scoped to the ASP.NET runtime – effectively scoped to the entire running application/app domain. You can still manually force validation using Request.ValidateInput(), which gives you the option to do this in code, but that realistically will only work with requestValidationMode set to V2.0 as well, since the 4.0 mode auto-fires before code ever gets a chance to intercept the call. Given all that, the new setting in ASP.NET 4.0 seems to limit options and makes things more difficult and less flexible. Of course Microsoft gets to say ASP.NET is more secure by default because of it, but what good is that if you have to turn off the flag the very first time you need to allow one single request that bypasses request validation??? This is really shortsighted design… <sigh>

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET
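    As a closing illustration of the Request.ValidateInput() option mentioned above, here is a minimal sketch of a handler that opts in explicitly. This assumes requestValidationMode="2.0" (otherwise validation has already fired before the handler runs); the handler and field names are placeholders:

      public class SelectiveHandler : IHttpHandler
      {
          public void ProcessRequest(HttpContext context)
          {
              // Flags the request so that the next access to Form/QueryString
              // runs request validation and throws on dangerous input.
              context.Request.ValidateInput();

              string data = context.Request.Form["data"]; // validation fires here
              context.Response.ContentType = "text/plain";
              context.Response.Write("Accepted " + (data ?? "").Length + " characters");
          }

          public bool IsReusable
          {
              get { return false; }
          }
      }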


  • User Experience Highlights in PeopleSoft and PeopleTools: Direct from Jeff Robbins

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

    This is the fifth in a series of blog posts on the user experience (UX) highlights in various Oracle product families. The last posted interview was with Nadia Bendjedou, Senior Director, Product Strategy, on upcoming Oracle E-Business Suite user experience highlights. You’ll see themes around productivity and efficiency, and get an early look at the latest mobile offerings coming through these product lines. Today’s post is on the user experience in PeopleSoft and PeopleTools. To learn more about what’s ahead, attend PeopleSoft or PeopleTools OpenWorld presentations. This interview is with Jeff Robbins, Senior Director, PeopleSoft Development.

    Q: How would you describe the vision you have for the user experience of PeopleSoft?

    A: Intuitive – Specifically, customers use PeopleSoft to help their employees do their day-to-day work, and the UI (user interface) has been helpful and assistive in that effort. If it’s not obvious what they need to do for a task, then the UI isn’t working. So the application needs to make it simple for users to find the information they need, complete a task, and do all the things they are responsible for, and it really helps when the UI just makes sense.

    Productive – PeopleSoft is a tool used to support people in their work, and a lot of users are measured by how much work they’re able to get done per hour, per day, etc. The UI needs to help them be as productive as possible, and can’t make them waste time or energy. The UI needs to reflect the type of work necessary for a task: if it’s data entry, the UI needs to assist the user in getting information into the system; for analysts, the UI needs to help users assess or analyze information in a particular way.

    Innovative – The concept of the UI being innovative is something we’ve been working on for years. It’s not just that we want to be seen as innovative; the fact is that companies are asking their employees to do more than they’ve ever asked before. More often companies want to roll out processes as employee or manager self-service, where an employee is responsible for reviewing and maintaining their own data. So we’ve had to reinvent, and ask, “How can we modify the ways an employee interacts with our applications so that they can be more productive and efficient – even with tasks that are entirely unfamiliar?” Our focus on innovation has forced us to design new ways for users to interact with the entire application.

    Q: How are the UX features you have delivered so far resonating with customers?

    A: Resonating very well. We’re hearing tremendous responses from users, managers, decision-makers – who are very happy with the improved user experience. Many of the individual features resonate well. Some have really hit home; others are better than they used to be but show us that there’s still room for improvement. A couple of innovations really stand out; features that have a significant effect on how users interact with PeopleSoft. First, the deployment of PeopleSoft in a way that’s more like a consumer website, with the PeopleSoft Home page and Dashboards. This new approach is very web-centric, where users feel they’re coming to a website rather than logging into an enterprise application. There’s lots of information from all around the organization collected in a way that feels very familiar to users. In order to do your job, you can come to this web site rather than having to learn how to log into an application and figure out a complicated menu.
Companies can host these really rich web sites for employees that are home pages for accessing critical tasks and information. The UI elements of incorporating search into the whole navigation process is another hit. Rather than having to log in and choose a task from a menu, users come to the web site and begin a task by simply searching for data: themselves, another employee, a customer record, whatever.  The search results include the data along with a set of actions the user might take, completely eliminating the need to hunt through a complicated system menu. Search-centric navigation is really sitting well with customers who are trying to deploy an intuitive set of systems. Q: Are any UX highlights more popular than you expected them to be?  A: We introduced a feature called Pivot Grid in the last release, which is a combination of an interactive grid, like an Excel Pivot Table, along with a dynamic visual chart that automatically graphs the data. I wasn’t certain at first how extensively this would be used. It looked like an innovative tool, but it wasn’t clear how it would be incorporated in business process applications. The fact is that everyone who sees Pivot Grids is thrilled with that kind of interactivity.  It reflects the amount of analytical thinking customers are asking employees to do. Employees can’t just enter data any more. They must interact with it, analyze it, and make decisions. Pivot Grids fit into this way of working. Q: What can you tell us about PeopleSoft’s mobile offerings?A: A lot of customers are finding that mobile is the chief priority in their organization.  They tell us they want their employees to be able to access company information from their mobile devices.  Of course, not everyone has the same requirements, so we’re working to make sure we can help our customers accomplish what they’re trying to do.  We’ve already delivered a number of mobile features.  For instance, PeopleSoft home pages, dashboards and workcenters all work well on an iPad, straight out of the box.  We’ve delivered a number of key functions and tasks for mobile workers – those who are responsible for using a mobile device to manage inventory, for example.  Customers tell us they also need a holistic strategy, one that allows their employees to access nearly every task from a mobile device.  While we don’t expect users to do extensive data entry from their smartphone, it makes sense that they have access to company information and systems while away from their desk.  That’s where our strategy is going now.  We plan to unveil a number of new mobile offerings at OpenWorld.  Some will be available then, some shortly after. Q: What else are you working on now that you think is going to be exciting to customers at Oracle OpenWorld?A: Our next release -- the big thing is PeopleSoft 9.2, and we’ll be talking about the huge amount of work that’s gone into the next versions. A new toolset, 8.53, will be coming, and there’s a lot to talk about there, and the next generation of PeopleSoft 9.2.  We have a ton of new stuff coming.Q: What do you want PeopleSoft customers to know? A: We have been focusing on the user experience in PeopleSoft as a very high priority for the last 4 years, and it’s had interesting effects. One thing is that the application is better, more usable.  We’ve made visible improvements. Another aspect is that in customers’ minds, the PeopleSoft brand is being reinvigorated. Customers invested in PeopleSoft years ago, and then they weren’t sure where PeopleSoft was going.  
This investment in the UI and overall user experience keeps PeopleSoft current, innovative, and fresh. Customers are able to take advantage of many new features, even on older applications, simply by upgrading their PeopleTools, and the interest in that ability has been tremendous. Knowing they have these features available right now is pretty huge. There's been a tremendous amount of positive response simply because we're focusing on the user experience.

Editor's note: For more on PeopleSoft and PeopleTools user experience highlights, visit the Usable Apps web site. To find out more about these enhancements at OpenWorld, be sure to check out these sessions:

GEN8928  General Session: PeopleSoft Update and Product Roadmap
CON9183  PeopleSoft PeopleTools Technology Roadmap
CON8932  New Functional PeopleSoft PeopleTools Capabilities for the Line-of-Business User
CON9196  PeopleSoft PeopleTools Roadmap: Mobile Applications
CON9186  Case Study: Delivering a Groundbreaking User Interface with PeopleSoft PeopleTools

    Read the article

  • Back home :-)

    - by Mike Dietrich
Wrote this entry last night on the ICE from Stuttgart to Munich, but the connection broke: a 28.5-hour journey, and close by now. Actually I would have been even closer if our TGV hadn't had brake problems as soon as we entered German territory. And you don't want a train that goes up to a speed of 200 mph having issues with its brakes, right? So we missed the connection in Stuttgart, but I caught the last train that night towards Munich. Distance: approximately 1,900 km all together. Usually it takes 2.5 hours with a direct flight with Aer Lingus from Munich, or a bit more when you go through Zurich or Frankfurt. But at least you meet more people and see a bit more of the landscapes passing by :-) Except for the brake problem everything worked out well so far (I'm now there, finally!). I had 4 hours in Paris to change from Gare du Nord to Gare de l'Est, and one thing I really have to point out: the people working for SNCF, the French national railways, were so organized and helpful, purely amazing. I asked the man at the counter where I had to pick up my prepaid tickets for directions to Gare de l'Est, and after we had a chat about Marlene Dietrich he just grabbed his iPhone, started Google Earth, and showed me the way to walk. I'm pretty sure it's a silly stereotype that people in Paris or France are unfriendly to foreigners who don't speak French. In my past three stays in Paris over the past two years I have had only great experiences. And another thing I really enjoy when being in France: the food!!! The sandwich I had at the train station was packed with yummy goat cheese. And there's always Paul. You might ask yourself: who the heck is Paul? That's Paul - or actually their website. And at Paul's they usually serve excellent fruit tartes - and this time a nice gâteau au chocolat. And very good café crème as well :-) That's actually the positive part of traveling this way: the food you get is much better than airline food - if your airline still serves something called food ...

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
Razor is an alternate view engine for ASP.NET MVC. It was introduced in the "WebMatrix" tool and has now been released as part of the ASP.NET MVC 3 Preview 1. Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML. Additionally, it provides some really nice features for master page type scenarios, and you don't lose access to any of the features you are currently familiar with, such as HTML helper methods.

First, download and install the ASP.NET MVC 3 Preview 1. You can find it at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en.

Now, follow these steps to create your first ASP.NET MVC project using Razor:
1. Open Visual Studio 2010.
2. Create a new project. Select File->New->Project (Ctrl+Shift+N).
3. You will see the list of project types, which should look similar to what's shown.
4. Select "ASP.NET MVC 3 Web Application (Razor)." Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard ASP.NET view engine and template, which isn't what you want.
5. For this tutorial, and ONLY for this tutorial, select "No, do not create a unit test project." In general, you should create and use a unit test project. Code without unit tests is kind of like diet ice cream. It just isn't very good.

Once this is done, our brand new project will be created. In all likelihood, Visual Studio will leave you looking at the "HomeController.cs" class, as shown below. Immediately, you should notice one difference. The Index action used to look like:

public ActionResult Index()
{
    ViewData["Message"] = "Welcome to ASP.NET MVC!";
    return View();
}

While this will still compile and run just fine, ASP.NET MVC 3 has a much nicer way of doing this:

public ActionResult Index()
{
    ViewModel.Message = "Welcome to ASP.NET MVC!";
    return View();
}

Instead of using ViewData, we are using the new ViewModel object, which uses the new dynamic typing of .NET 4.0 to allow us to express ourselves much more cleanly. This isn't a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor.

What comes in the box? When we create a project using the ASP.NET MVC 3 template with Razor, we get a standard project setup, just like in ASP.NET MVC 2.0 but with some differences. Instead of seeing ".aspx" view files and ".ascx" files, we see files with the ".cshtml" extension, which is the default for Razor. Before we discuss the details of a Razor file, one thing to keep in mind is that since this is an extremely early preview, IntelliSense is not currently enabled for the Razor view engine; an update is promised before the final release. Just like with the ASPX view engine, the convention of the folder name for a set of views matching the controller name without the word "Controller" still stands. Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory. Remember, in ASP.NET MVC, convention over configuration is key to successful development!

The initial template organizes views in the following folders, located in the project under Views:
- Account - The default account management views used by the Account controller. Each file represents a distinct view.
- Home - Views corresponding to the appropriate actions within the Home controller.
- Shared - This contains common view objects used by multiple views. Master pages are stored here, as well as partial page views (user controls). By convention, these partial views are named "_XXXPartial.cshtml", where XXX is the appropriate name, such as _LogonPartial.cshtml. Display templates are also stored under here.

With this in mind, let us take a look at the index.cshtml file under the Home view directory. When you open up index.cshtml you should see:

1:  @inherits System.Web.Mvc.WebViewPage
2:  @{
3:      View.Title = "Home Page";
4:      LayoutPage = "~/Views/Shared/_Layout.cshtml";
5:  }
6:  <h2>@View.Message</h2>
7:  <p>
8:      To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC
9:      Website">http://asp.net/mvc</a>.
10: </p>

Looking through this, we observe the following facts:

Line 1 imports the base page that all Razor views are based on, which is System.Web.Mvc.WebViewPage. Note that this is different from System.Web.Mvc.ViewPage, which is used by ASP.NET MVC 2.0. Also note that instead of the <% %> syntax, we use the very simple '@' sign. The view engine contains enough context-sensitive logic that it can even distinguish between @ in code and @ in an email address. It's a very clean markup.

Line 2 introduces the idea of a code block in Razor. A code block is a scoping mechanism just like it is in a normal C# class. It is designated by @{ ... } and any C# code can be placed in between. Note that this is all server-side code, just like it is when using the ASPX engine and <% %>.

Line 3 allows us to set the page title in the client page's file. This is a new feature which I'll talk more about when we get to master pages, but it is another of the nice things Razor brings to ASP.NET MVC development.

Line 4 is where we specify our "master" page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located. A layout page is similar to a master page, but it gains a bit when it comes to flexibility. Again, we'll come back to this in a later installment.

Line 6 and beyond is where we display the contents of our view. No more <%: %> intermixed with code. Instead, we get to use very clean syntax such as @View.Message. This is a lot easier to read than <%: View.Message %>, especially when intermixed with HTML. For example:

<p>
My name is @View.Name and I live at @View.Address
</p>

Compare this to the equivalent using the ASPX view engine:

<p>
My name is <%: View.Name %> and I live at <%: View.Address %>
</p>

While not an earth-shaking simplification, it is easier on the eyes. As we explore other features, this clean markup will become more and more valuable.
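To see how the dynamic ViewModel and the @ syntax fit together end to end, here is a small sketch. The About action, the Name/Address properties, and their values are made up for illustration; they are not part of the default template:

// HomeController.cs - a hypothetical second action
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult About()
    {
        // Dynamic properties replace string-keyed ViewData entries
        ViewModel.Name = "Maarten";
        ViewModel.Address = "123 Razor Lane";
        return View();
    }
}

@* Views/Home/About.cshtml - the matching Razor view *@
@inherits System.Web.Mvc.WebViewPage
@{
    View.Title = "About";
    LayoutPage = "~/Views/Shared/_Layout.cshtml";
}
<h2>About @View.Name</h2>
<p>My name is @View.Name and I live at @View.Address.</p>

Because both ViewModel (controller side) and View (view side) are dynamic, the property names only have to agree between the two files; there is no compile-time checking, so a typo in either place shows up at runtime rather than at build time.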

    Read the article

  • How can I troubleshoot flash player/hardware conflict?

    - by sparthikas
OBJECTIVE: Have a web browser on my Ubuntu install that can play YouTube and Hulu videos. I would also like to understand the problem so that I can fix it again if I change software. Workarounds are welcome; technical understanding and a solution are preferable.

SYMPTOMS: Flash does not run - I cannot make the right-click menu appear, an empty box sits where the video should be, and it changes to a black box when hovering over other links. The "The Adobe Flash plugin has crashed" message does not appear with its sad Lego face. I also cannot activate the proprietary graphics driver - it causes the system to reboot to a prompt.

SOLUTIONS TRIED: Replaced the OS (tried Slackware 13.37, Fedora 17, Linux Mint 13 Maya, Gentoo, Lubuntu, and even WinXP; Lubuntu is confirmed to work, though I don't remember how much tweaking, if any, that required; Slackware, Fedora, Mint, and Gentoo all failed to run Flash just like Ubuntu). Many reinstalls of Flash Player via different methods, including cleaning up old installs first; also tried Gnash and Lightspark. Replaced the graphics card (swapped the HIS IceQ Radeon HD 4670 AGP for an older GeForce 5700 LE - no change in the problem). Flash does work on the WinXP installation with the Catalyst AGP hotfix driver applied, but I consider Windows wholly unacceptable for web browsing due to lack of security. The Lubuntu install also works, but I do not want to be tied down to just using Lubuntu on this computer.

SYSINFO: I have the latest versions of Ubuntu, Firefox, and Flash on a fresh Ubuntu install. Gigabyte 7S748 motherboard with an Athlon XP 2800+ and 3 GB of RAM, a Radeon HD 4670 AGP card, and a Dell SoundBlaster Live series sound card (due to a malfunction of the onboard sound). Wired internet connection, Maxtor 6Y120L3 HDD, Sony DVD RW AW-Q170A, Dell M993s monitor.

NOTES: I do not know if the graphics driver issue and the Flash troubles are linked; the fact that an older substitute graphics card shows the same Flash troubles suggests they aren't. My troubleshooting method is rather reductionist, consisting mainly of "replace things until you find out what was causing the error by process of elimination," only it seems that this must be a conflict which arises when software decides how to configure itself on my hardware. That is, I know the hardware can run Flash, and I know that on other systems the same software can too, but somehow the combination fails. Consequently I feel out of my depth. I will keep trying things off and on, but I have spent probably 30 man-hours in the last 4 months working on this problem with no joy other than the Lubuntu workaround. Any help will be appreciated; I will be checking back and posting updates. Any pertinent questions regarding me or my computer will be answered, and outputs from config files can be accessed and posted (I don't know which ones or what parts to post, so the diagnostic commands sketched below are where I plan to start).
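For a starting point, here is a minimal set of diagnostic commands I plan to run and post output from (standard Ubuntu tools; glxinfo assumes the mesa-utils package is installed):

# Identify the GPU and which kernel driver is actually bound to it
lspci -nnk | grep -A3 VGA

# Check whether OpenGL is using hardware or software rendering
glxinfo | grep "OpenGL renderer"

# Confirm which Flash plugin file the browser is picking up
find /usr/lib -name 'libflashplayer.so' 2>/dev/null

# Scan the X server log for driver errors and warnings
grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log | head -n 40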

    Read the article

  • How Microsoft listens

    - by Stacy Vicknair
This being my freshman year as an MVP, I had a realization that I perhaps should be embarrassed hasn't happened sooner. The realization comes much like the iconic M&Ms commercial where the M&Ms run into Santa and exclaim, "He does exist!" My personal realization arguably has a greater implication: Microsoft does listen. This is the most important lesson I took away from attending the MVP Summit this year. My hope is that I can convince you that we are empowered to make a difference. Instead of using "Man, I hate how this works / doesn't work!" as water-cooler conversation, we can use it as the start of true interaction with Microsoft. We as Microsoft customers need to stop asking the question "Will this work for me?" and instead ask "How can this work for me?" There are three resources that the average developer has access to today that they can use to be heard by the product teams, and by no means should you think twice if you have a concern that you'd like a real response on.

MVPs
MVPs are members of your community who have a deep relationship with Microsoft and will have connections to their associated product group. Don't think of them as just a resource for answers, but also as your ambassador for getting your experiences heard. You can find your local MVPs by browsing the directory at: https://mvp.support.microsoft.com/communities/mvp.aspx

Evangelists
Evangelists are employees of Microsoft who work to foster and grow communities in their assigned region. They are first-class citizens of Microsoft and are often deeply involved with the product groups. As a result, they will be more than glad to direct your questions or concerns to those who can answer them most expertly. With that said, evangelists are also very busy people (who do amazing things for the community) and might not be able to get you that conversation as quickly as a local MVP. You can find your local evangelist at the following website: http://msdn.microsoft.com/en-us/bb905078.aspx

Microsoft Connect
This is one of the resources that I haven't used enough, but its value cannot be overstated. Connect is the starting point of the social conversation that happens between Microsoft and the community daily. Connect acts as a portal where you can provide new feedback as well as comment on and rate the feedback provided by others. Power is in numbers when it comes to Connect, so the exposure your feedback can get not only lets you know that you aren't the only one who wants change, but also lets Microsoft know the same. https://connect.microsoft.com

Technorati Tags: Microsoft,MVP,Feedback,Connect

    Read the article

  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes - the Process Document and the Java Specification Participation Agreement (JSPA) - and will apply to all new JSRs for all Java platforms.

The first - JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process - was completed and put into effect in October 2011 as JCP 2.8. It focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements.

The second - JSR 355, Executive Committee Merge - is also final; you can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC elections in November.

The third - JSR 358, A major revision of the Java Community Process - was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358 is now in process and can be followed on java.net.

The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above. More candidates for 2012 JCP Star Spec Leads!

- JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012
- JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012
- JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012
- JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle) - EG formation, migrated September 2012
- JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012
- JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012
- JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012
- JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012
- JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012
- JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012
- JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012
- JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat), migrated December 2011

    Read the article

  • The Loneliest Road in America and the OTN Garage

    - by rickramsey
Source

I never told anyone how the image of the OTN Garage on Facebook came to be. I took the Facebook picture on Route 50 in Nevada, USA. I was riding from Colorado to Oracle OpenWorld in San Francisco, so it was probably October of 2010. Route 50 is known as "The Loneliest Road in America." There are roads across Nevada that have even LESS traffic, but Route 50 is still one. desolate. road. Although I have seen stranger things while riding along Nevada's Extraterrestrial Highway, I still run across notable oddities every time I ride Route 50. Like the old man with a bandolier of water bottles jogging along the side of the highway in the middle of the day, 50 miles from the closest town. First ultra-marathoner I'd seen in action. He waved at me. Or the dozen Corvettes with California license plates driving toward me, all doing the speed limit in the middle of nowhere because they were being tailed by half a dozen Nevada state troopers. #fail. I don't remember which town I was in, but I noticed the building when I stopped at the gas station. While standing there pouring fuel into the Harley, the store caught my eye. So I pulled the bike in front and walked inside. The owner is a little old lady, about 100 years old. Most of the goods she had on the shelves looked like they had been placed there during WWII. She was itty bitty and could barely see over the counter, but she was so happy when I bought a bar of Hershey's chocolate that she gave me a five-cent discount. I took a few pictures and, when I got back, Kemer Thomson, who sometimes blogs here, photoshopped the OTN Garage and Oil Change signs onto it. The bike is a 2009 Road King Classic with a Bob Dron fairing and a Corbin heated seat. The seat came in handy when I rode home over Tioga Pass. The Road King is a very comfy touring bike with a great Harley rumble. I'm kinda sorry I sold it. When I stopped for fuel about 75 miles down the road at the next town, I peeled back the chocolate bar. It had turned into powder. Probably 50 years ago. - Rick

Website | Newsletter | Facebook | Twitter

    Read the article

  • Can't install graphic drivers in 12.04

    - by yinon
    The driver is ATI/AMD proprietary FGLRX graphics driver. After clicking Activate, it asks for my password and starts downloading. Then it shows an error message: 2012-10-03 16:16:04,227 DEBUG: updating <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> 2012-10-03 16:16:06,172 DEBUG: reading modalias file /lib/modules/3.2.0-29-generic-pae/modules.alias 2012-10-03 16:16:06,383 DEBUG: reading modalias file /usr/share/jockey/modaliases/b43 2012-10-03 16:16:06,386 DEBUG: reading modalias file /usr/share/jockey/modaliases/disable-upstream-nvidia 2012-10-03 16:16:06,456 DEBUG: loading custom handler /usr/share/jockey/handlers/pvr-omap4.py 2012-10-03 16:16:06,506 WARNING: modinfo for module omapdrm_pvr failed: ERROR: modinfo: could not find module omapdrm_pvr 2012-10-03 16:16:06,509 DEBUG: Instantiated Handler subclass __builtin__.PVROmap4Driver from name PVROmap4Driver 2012-10-03 16:16:06,682 DEBUG: PowerVR SGX proprietary graphics driver for OMAP 4 not available 2012-10-03 16:16:06,682 DEBUG: loading custom handler /usr/share/jockey/handlers/cdv.py 2012-10-03 16:16:06,727 WARNING: modinfo for module cedarview_gfx failed: ERROR: modinfo: could not find module cedarview_gfx 2012-10-03 16:16:06,728 DEBUG: Instantiated Handler subclass __builtin__.CdvDriver from name CdvDriver 2012-10-03 16:16:06,728 DEBUG: cdv.available: falling back to default 2012-10-03 16:16:06,772 DEBUG: Intel Cedarview graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,772 DEBUG: loading custom handler /usr/share/jockey/handlers/vmware-client.py 2012-10-03 16:16:06,781 WARNING: modinfo for module vmxnet failed: ERROR: modinfo: could not find module vmxnet 2012-10-03 16:16:06,781 DEBUG: Instantiated Handler subclass __builtin__.VmwareClientHandler from name VmwareClientHandler 2012-10-03 16:16:06,795 DEBUG: VMWare Client Tools availability undetermined, adding to pool 2012-10-03 16:16:06,796 DEBUG: loading custom handler /usr/share/jockey/handlers/fglrx.py 2012-10-03 16:16:06,801 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates 2012-10-03 16:16:06,805 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriverUpdate from name FglrxDriverUpdate 2012-10-03 16:16:06,805 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,833 DEBUG: ATI/AMD proprietary FGLRX graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:06,836 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx 2012-10-03 16:16:06,840 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriver from name FglrxDriver 2012-10-03 16:16:06,840 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,873 DEBUG: ATI/AMD proprietary FGLRX graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,873 DEBUG: loading custom handler /usr/share/jockey/handlers/dvb_usb_firmware.py 2012-10-03 16:16:06,925 DEBUG: Instantiated Handler subclass __builtin__.DvbUsbFirmwareHandler from name DvbUsbFirmwareHandler 2012-10-03 16:16:06,926 DEBUG: Firmware for DVB cards not available 2012-10-03 16:16:06,926 DEBUG: loading custom handler /usr/share/jockey/handlers/nvidia.py 2012-10-03 16:16:06,961 WARNING: modinfo for module nvidia_96 failed: ERROR: modinfo: could not find module nvidia_96 2012-10-03 16:16:06,967 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96 from name NvidiaDriver96 2012-10-03 16:16:06,968 DEBUG: nvidia.available: falling back to default 
2012-10-03 16:16:06,980 DEBUG: XorgDriverHandler(nvidia_96, nvidia-96, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11 2012-10-03 16:16:06,980 DEBUG: NVIDIA accelerated graphics driver not available 2012-10-03 16:16:06,983 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current 2012-10-03 16:16:06,987 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrent from name NvidiaDriverCurrent 2012-10-03 16:16:06,987 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,015 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool 2012-10-03 16:16:07,018 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates 2012-10-03 16:16:07,021 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrentUpdates from name NvidiaDriverCurrentUpdates 2012-10-03 16:16:07,022 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,066 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:07,069 WARNING: modinfo for module nvidia_173_updates failed: ERROR: modinfo: could not find module nvidia_173_updates 2012-10-03 16:16:07,072 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173Updates from name NvidiaDriver173Updates 2012-10-03 16:16:07,073 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,105 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:07,112 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173 2012-10-03 16:16:07,118 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173 from name NvidiaDriver173 2012-10-03 16:16:07,119 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,159 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool 2012-10-03 16:16:07,166 WARNING: modinfo for module nvidia_96_updates failed: ERROR: modinfo: could not find module nvidia_96_updates 2012-10-03 16:16:07,171 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96Updates from name NvidiaDriver96Updates 2012-10-03 16:16:07,171 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,188 DEBUG: XorgDriverHandler(nvidia_96_updates, nvidia-96-updates, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11 2012-10-03 16:16:07,188 DEBUG: NVIDIA accelerated graphics driver (post-release updates) not available 2012-10-03 16:16:07,188 DEBUG: loading custom handler /usr/share/jockey/handlers/madwifi.py 2012-10-03 16:16:07,195 WARNING: modinfo for module ath_pci failed: ERROR: modinfo: could not find module ath_pci 2012-10-03 16:16:07,195 DEBUG: Instantiated Handler subclass __builtin__.MadwifiHandler from name MadwifiHandler 2012-10-03 16:16:07,196 DEBUG: Alternate Atheros "madwifi" driver availability undetermined, adding to pool 2012-10-03 16:16:07,196 DEBUG: loading custom handler /usr/share/jockey/handlers/sl_modem.py 2012-10-03 16:16:07,213 DEBUG: Instantiated Handler subclass __builtin__.SlModem from name SlModem 2012-10-03 16:16:07,234 DEBUG: Software modem not available 2012-10-03 16:16:07,234 DEBUG: loading custom handler /usr/share/jockey/handlers/broadcom_wl.py 2012-10-03 16:16:07,239 WARNING: modinfo for module wl failed: ERROR: modinfo: could not 
find module wl 2012-10-03 16:16:07,277 DEBUG: Instantiated Handler subclass __builtin__.BroadcomWLHandler from name BroadcomWLHandler 2012-10-03 16:16:07,277 DEBUG: Broadcom STA wireless driver availability undetermined, adding to pool 2012-10-03 16:16:07,278 DEBUG: all custom handlers loaded 2012-10-03 16:16:07,278 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027D8sv00001043sd000082EAbc04sc03i00') 2012-10-03 16:16:07,568 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'} 2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,699 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'} 2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,699 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0000v0000p0000e0000-e0,5,kramlsfw6,') 2012-10-03 16:16:07,704 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,704 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,704 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027DAsv00001043sd00008179bc0Csc05i00') 2012-10-03 16:16:07,707 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801'} 2012-10-03 16:16:07,707 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C01:') 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0B00:') 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001969d00001026sv00001043sd00008304bc02sc00i00') 2012-10-03 16:16:07,710 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'atl1e'} 2012-10-03 16:16:07,710 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'atl1e', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,710 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0003v04F2p0816e0111-e0,1,4,11,14,k71,72,73,74,75,77,79,7A,7B,7C,7D,7E,7F,80,81,82,83,84,85,86,87,88,89,8A,8C,8E,96,98,9E,9F,A1,A3,A4,A5,A6,AD,B0,B1,B2,B3,B4,B7,B8,B9,BA,BB,BC,BD,BE,BF,C0,C1,C2,F0,ram4,l0,1,2,sfw') 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 
'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,711 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:pcspkr') 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,712 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp'} 2012-10-03 16:16:07,712 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,712 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v1D6Bp0001d0302dc09dsc00dp00ic09isc00ip00') 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0019v0000p0001e0000-e0,1,k74,ramlsfw') 2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'} 2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C04:') 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:eisa') 2012-10-03 16:16:07,725 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027CCsv00001043sd00008179bc0Csc03i20') 2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:Fixed MDIO bus') 2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000029C0sv00001043sd000082B0bc06sc00i00') 2012-10-03 16:16:07,731 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v045Ep0766d0101dcEFdsc02dp01ic01isc01ip00') 2012-10-03 16:16:07,777 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio'} 2012-10-03 16:16:07,777 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,777 DEBUG: querying driver db 
<jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0F03:PNP0F13:') 2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0000:') 2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001002d000095C5sv0000174Bsd0000E400bc03sc00i00') 2012-10-03 16:16:08,072 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx_updates', 'package': 'fglrx-updates'} 2012-10-03 16:16:08,133 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,134 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:16:08,072 DEBUG: found match in handler pool xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates)) 2012-10-03 16:16:08,136 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates 2012-10-03 16:16:08,147 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:08,173 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,173 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:16:08,162 DEBUG: got handler xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates)) 2012-10-03 16:16:08,173 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx', 'package': 'fglrx'} 2012-10-03 16:16:08,184 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,184 DEBUG: fglrx is not the alternative in use 2012-10-03 16:16:08,173 DEBUG: found match in handler pool xorg:fglrx([FglrxDriver, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver) 2012-10-03 16:16:08,187 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx 2012-10-03 16:16:08,190 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:08,216 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None . . . 2012-10-03 16:18:10,552 DEBUG: install progress initramfs-tools 62.500000 2012-10-03 16:18:22,249 DEBUG: install progress libc-bin 62.500000 2012-10-03 16:18:23,251 DEBUG: Selecting previously unselected package dkms. (Reading database ... 142496 files and directories currently installed.) Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ... Selecting previously unselected package fakeroot. Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ... Selecting previously unselected package fglrx-updates. Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Selecting previously unselected package fglrx-amdcccle-updates. Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... 
ureadahead will be reprofiled on next reboot dpkg: error processing libxss1 (--configure): package libxss1 is already installed and configured dpkg: error processing chromium-codecs-ffmpeg (--configure): package chromium-codecs-ffmpeg is already installed and configured dpkg: error processing chromium-browser (--configure): package chromium-browser is already installed and configured dpkg: error processing chromium-browser-l10n (--configure): package chromium-browser-l10n is already installed and configured Setting up dkms (2.2.0.3-1ubuntu3) ... No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Setting up fakeroot (1.18.2-1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up fglrx-updates (2:8.960-0ubuntu1.1) ... update-alternatives: using /usr/lib/fglrx/ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in auto mode. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: using /usr/lib/fglrx/alt_ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in auto mode. update-initramfs: deferring update (trigger activated) Loading new fglrx-updates-8.960 DKMS files... First Installation: checking all kernels... Building only for 3.2.0-29-generic-pae Building for architecture i686 Building initial module for 3.2.0-29-generic-pae Done. fglrx_updates: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-29-generic-pae/updates/dkms/ depmod...... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle-updates (2:8.960-0ubuntu1.1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-29-generic-pae Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: libxss1 chromium-codecs-ffmpeg chromium-browser chromium-browser-l10n Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) 2012-10-03 16:18:23,256 ERROR: Package failed to install: Selecting previously unselected package dkms. (Reading database ... 142496 files and directories currently installed.) Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ... Selecting previously unselected package fakeroot. Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ... Selecting previously unselected package fglrx-updates. Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Selecting previously unselected package fglrx-amdcccle-updates. Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Processing triggers for man-db ... 
Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot dpkg: error processing libxss1 (--configure): package libxss1 is already installed and configured dpkg: error processing chromium-codecs-ffmpeg (--configure): package chromium-codecs-ffmpeg is already installed and configured dpkg: error processing chromium-browser (--configure): package chromium-browser is already installed and configured dpkg: error processing chromium-browser-l10n (--configure): package chromium-browser-l10n is already installed and configured Setting up dkms (2.2.0.3-1ubuntu3) ... No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Setting up fakeroot (1.18.2-1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up fglrx-updates (2:8.960-0ubuntu1.1) ... update-alternatives: using /usr/lib/fglrx/ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in auto mode. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: using /usr/lib/fglrx/alt_ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in auto mode. update-initramfs: deferring update (trigger activated) Loading new fglrx-updates-8.960 DKMS files... First Installation: checking all kernels... Building only for 3.2.0-29-generic-pae Building for architecture i686 Building initial module for 3.2.0-29-generic-pae Done. fglrx_updates: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-29-generic-pae/updates/dkms/ depmod...... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle-updates (2:8.960-0ubuntu1.1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-29-generic-pae Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Errors were encountered while processing: libxss1 chromium-codecs-ffmpeg chromium-browser chromium-browser-l10n Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) 2012-10-03 16:18:23,590 WARNING: /sys/module/fglrx_updates/drivers does not exist, cannot rebind fglrx_updates driver 2012-10-03 16:18:43,601 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:43,601 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:43,617 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:43,617 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,143 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,144 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,154 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,154 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,182 DEBUG: fglrx.enabled(fglrx): target_alt /usr/lib/fglrx/ld.so.conf current_alt /usr/lib/fglrx/ld.so.conf other target alt /usr/lib/fglrx/alt_ld.so.conf other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,182 DEBUG: XorgDriverHandler(%s, %s).enabled(): No X.org driver set, not checking 2012-10-03 16:18:54,215 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,215 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,229 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,229 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,268 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,268 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,279 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,279 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,298 DEBUG: fglrx.enabled(fglrx): target_alt /usr/lib/fglrx/ld.so.conf current_alt /usr/lib/fglrx/ld.so.conf other target alt /usr/lib/fglrx/alt_ld.so.conf other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,298 DEBUG: XorgDriverHandler(%s, %s).enabled(): No X.org driver set, not checking 2012-10-03 16:18:57,828 DEBUG: Shutting down I don't know how to troubleshoot from looking at the log file, could somebody assist me with this please? You can download the log file at: https://www.dropbox.com/s/a59d2hyabo02q5z/jockey.log
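Reading the tail of the log, the fglrx DKMS module actually builds and installs ("DKMS: install completed"); the failure comes from dpkg erroring out on unrelated packages (libxss1, chromium-codecs-ffmpeg, chromium-browser, chromium-browser-l10n) that it reports as "already installed and configured". A generic dpkg/apt recovery pass is usually the first thing to try in that situation (standard commands, not a guaranteed fix for this specific bug):

# Finish configuring any half-configured packages
sudo dpkg --configure -a

# Ask apt to repair broken or unmet dependencies
sudo apt-get install -f

# Then retry the driver installation cleanly
sudo apt-get update
sudo apt-get install --reinstall fglrx-updates fglrx-amdcccle-updates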

    Read the article

  • nfs-kernel-server installation : file does not exist

    - by Stuti Rastogi
I am extremely new to Ubuntu and need to work on the EdX platform. For that, I need to install the NFS client on Ubuntu 12.04. I used the following command:

stuti@stuti:/$ sudo apt-get install nfs-kernel-server

However, this gives me an error:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  nfs-common
The following NEW packages will be installed:
  nfs-common nfs-kernel-server
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/355 kB of archives.
After this operation, 1,222 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Selecting previously unselected package nfs-common.
(Reading database ... 200367 files and directories currently installed.)
Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
Selecting previously unselected package nfs-kernel-server.
Unpacking nfs-kernel-server (from .../nfs-kernel-server_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
Processing triggers for ureadahead ...
Processing triggers for man-db ...
Setting up nfs-common (1:1.2.5-3ubuntu3.1) ...
statd start/running, process 4574
gssd stop/pre-start, process 4603
idmapd start/running, process 4643
Setting up nfs-kernel-server (1:1.2.5-3ubuntu3.1) ...
update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist
dpkg: error processing nfs-kernel-server (--configure):
 subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 nfs-kernel-server
E: Sub-process /usr/bin/dpkg returned an error code (1)

I have tried:

sudo apt-get autoremove nfs-kernel-server
sudo apt-get autoremove nfs-common

After these, I tried to install again, but I keep getting the same error. apt-get update and upgrade also do not help and give the same error. I am clueless as to where I can find the missing file mentioned in the output (the commands I plan to try next are sketched below). I tried to google this problem, but none of the solutions I came across have helped, or I have not been able to understand some of them. Any help would really be appreciated. Thanks in advance for your time and attention.
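Based on the error, the package's post-installation script expects /etc/init.d/nfs-kernel-server to exist and it doesn't, which usually points at a broken package state. A sketch of the generic recovery steps to try (standard apt/dpkg commands; note that purge removes configuration files as well, so use it with care):

# Remove both packages completely, including leftover config state
sudo apt-get purge nfs-kernel-server nfs-common

# Finish any half-configured packages
sudo dpkg --configure -a

# Reinstall from a fresh package list
sudo apt-get update
sudo apt-get install nfs-kernel-server

# If the init script is still missing, force dpkg to unpack the files again
sudo apt-get install --reinstall nfs-kernel-server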

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia. At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions. Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL.

There were roughly 250 attendees at the event, about half of whom were vendors and consultants; the rest were financial reporting staff from corporate filers. Event sponsors included Ernst & Young, SWIFT and Fujitsu. There were also a number of XBRL technology and service providers exhibiting at the conference.

On Monday, Nov. 8th, the XBRL US Steering Committee meetings and the Annual Members meeting and reception were held. At the Annual Members meeting the big news was that the current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center. Campbell Pryde, who had led taxonomy development for XBRL US, is taking over as XBRL US President. Other items highlighted at the members meeting included:

- The US GAAP XBRL taxonomy is being used by over 1,500 SEC filers and has now been handed over to the FASB to maintain and enhance
- 16 filer training events were held in 2010
- XBRL Global Magazine was launched
- A Corporate Actions proposal was submitted to the SEC with SWIFT in May
- XBRL Labs for iPhone and the XBRL US Consistency Suite were launched
- ISO 20022 Corporate Actions alignment with XBRL was achieved
- The XBRL Credit Rating taxonomy was accepted

Tuesday, Nov. 9th included keynotes, general sessions, an Innovation Workshop for Governments and Securities Professionals, and an opening reception. General sessions included:

Lessons Learned from the SEC's Rollout of XBRL. More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010. Most of these related to negative values being used where they shouldn't have been. Also, the SEC feels too many taxonomy extensions are being created - mostly in the cash flow statements. They emphasize using existing elements in the US GAAP taxonomy and advise filers not to create extensions merely to improve the visual formatting of XBRL filings.

Investors and XBRL - Setting the Standard for Data Quality. In this panel discussion, the key learning was that CFAs, academics and the financial community are not using XBRL as expected. The issues raised included the accuracy and completeness of filings, the number of taxonomy extensions, and the limited number of tools available to help analyze XBRL data. Another big issue raised was the lack of historic results in XBRL - most analysts need 10 quarters of historic data. On the positive side, XBRL has the potential to eliminate re-keying of data and the errors that come with it, and it can improve analytic capabilities for financial analysts once more historic data is available and more companies provide detailed tagging of their filings.

A US Roadmap for XBRL Financial Reporting. This was a panel discussion featuring Jeff Neumann (SEC), Campbell Pryde (XBRL US), and Louis Matherne (FASB). Key points included the fact that XBRL is currently used by 1,500 companies, with 8,000 more companies coming in 2011. XBRL for mutual fund reporting will start in 2011 for 8,000 funds, and a Credit Rating taxonomy has now been submitted for review. The XBRL tagging/filing process is improving each quarter - more education is helping here. The FASB is looking at extensions to date and potential additions to the US GAAP taxonomy, while the SEC is evaluating filings for accuracy and consistency in tagging, along with tools for analyzing data. The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by the SEC. The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (e.g., airlines), and eliminates 500 concepts (meaning they can't be used going forward but are still supported for historical comparison). The 2011 US GAAP Taxonomy will be available for use with Q2 2011 SEC filings. More information can be found on the FASB web site: http://www.fasb.org/home

Accounting Firms and XBRL. This session covered the role of audit firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning. The main advice was that organizations should document their XBRL mapping process and perform peer comparisons and risk assessments on a regular basis.

Wednesday, Nov. 10th included more keynotes, general sessions on corporate actions, and the XBRL Essentials workshop training for corporate filers. The XBRL Essentials training included:

- Getting Started
- Once You Have the Basics
- Detailed Footnote Tagging and Handling Tables
- Quality Control and Trust in the XBRL Process
- Bringing XBRL In-House: What Are the Options, What Should You Consider?
- The US GAAP Financial Reporting Taxonomy - Overview of the 2011 Release

The XBRL Essentials training was well attended, with about 80 people. It included a good overview of the SEC's XBRL mandate, the limited liability issue, tagging levels, the recommended planning process, internal vs. outsourced approaches, and how to manage service providers. I learned a lot from the session on detailed tagging. This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information). The review of the linkbase model, or dimensional table structure, was very interesting; it can be complex to understand. The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required. The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx)

For me, the main summary points and takeaways from the XBRL US conference are:

- XBRL for financial reporting has turned the corner and gone mainstream, with 1,500 companies currently using it and 8,000 more coming in 2011
- The expected value is not yet being achieved by filers or consumers of XBRL data; this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle)
- XBRL is becoming the global standard for all business communications beyond just the financials, e.g., adoption for mutual funds, corporate actions and others planned for the future

If you would like to learn more about XBRL and the various training programs, services and software tools that are available, check out the XBRL US web site and, even better, become a member. Here's a link: http://xbrl.us/Pages/default.aspx
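For readers who have not seen what "tagging" actually produces, the following sketch builds a minimal XBRL instance fragment with one tagged fact using only Python's standard library. The xbrli namespace is the standard XBRL 2.1 one; the us-gaap namespace date, the CIK, and the values are illustrative placeholders, and a real filing would also declare the iso4217 namespace and reference the taxonomy schema:

import xml.etree.ElementTree as ET

# Standard XBRL instance namespace (XBRL 2.1 specification)
XBRLI = "http://www.xbrl.org/2003/instance"
US_GAAP = "http://fasb.org/us-gaap/2011-01-31"  # illustrative taxonomy namespace

ET.register_namespace("xbrli", XBRLI)
ET.register_namespace("us-gaap", US_GAAP)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# A context says WHO the fact is about and WHEN it applies
context = ET.SubElement(root, f"{{{XBRLI}}}context", id="FY2010Q4")
entity = ET.SubElement(context, f"{{{XBRLI}}}entity")
ident = ET.SubElement(entity, f"{{{XBRLI}}}identifier",
                      scheme="http://www.sec.gov/CIK")
ident.text = "0000000000"  # made-up CIK
period = ET.SubElement(context, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2010-12-31"

# A unit says what the number is measured in
unit = ET.SubElement(root, f"{{{XBRLI}}}unit", id="USD")
ET.SubElement(unit, f"{{{XBRLI}}}measure").text = "iso4217:USD"

# The tagged fact itself: one US GAAP concept, tied to context and unit
fact = ET.SubElement(root, f"{{{US_GAAP}}}Assets",
                     contextRef="FY2010Q4", unitRef="USD", decimals="0")
fact.text = "1000000"

print(ET.tostring(root, encoding="unicode"))

Every reported number in a filing becomes a fact like the one above; detailed tagging extends the same mechanism to footnotes and disclosures, and dimensional tables add axis/member structure to contexts instead of forcing new extension elements.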

    Read the article

  • Sounds of the Future at Oracle OpenWorld 2013

    - by Alliances & Channels Redaktion
    "The future begins at Oracle OpenWorld", das Motto weckt große Erwartungen! Wie die Zukunft aussehen könnte, davon konnten sich 60.000 Besucherinnen und Besucher aus 145 Ländern vor Ort in San Francisco selbst überzeugen: In sage und schreibe 2.555 Sessions – verteilt über Downtown San Francisco – ging es dort um Zukunftstechnologien und neue Entwicklungen. Wie soll man zusammenfassen, was insgesamt 3.599 Speaker, fast die Hälfte übrigens Kunden und Partner, in vier Tagen an technologischen Visionen entwickelt und präsentiert haben? Nehmen wir ein konkretes Beispiel, das in diversen Sessions immer wieder auftauchte: Das „Internet of Things“, sprich „intelligente“ Alltagsgegenstände, deren eingebaute Minicomputer ohne den Umweg über einen PC miteinander kommunizieren und auf äußere Einflüsse reagieren. Für viele ist das heute noch Neuland, doch die Weiterentwicklung des Internet of Things eröffnet für Oracle, wie auch für die Partner, ein spannendes Arbeitsfeld und natürlich auch einen neuen Markt. Die omnipräsenten Fokus-Themen der viertägigen größten Hauskonferenz von Oracle hießen in diesem Jahr Customer Experience und Human Capital Management. Spannend für Partner waren auch die Strategien und die Roadmap von Oracle sowie die Neuigkeiten aus den Bereichen Engineered Systems, Cloud Computing, Business Analytics, Big Data und Customer Experience. Neue Rekorde stellte die Oracle OpenWorld auch im Netz auf: Mehr als 2,1 Millionen Menschen besuchten diese Veranstaltung online und nutzten dabei über 224 Social-Media Kanäle – fast doppelt so viele wie noch vor einem Jahr. Die gute Nachricht: Die Oracle OpenWorld bleibt online, denn es besteht nach wie vor die Möglichkeit, OnDemand-Videos der Keynote- und Session-Highlights anzusehen: Gehen Sie einfach auf Conference Video Highlights und wählen Sie aus acht Bereichen entweder eine Zusammenfassung oder die vollständige Keynote beziehungsweise Session. Dort finden Sie auch Videos der eigenen Fach-Konferenzen, die im Umfeld der Oracle OpenWorld stattfanden: die JavaOne, die MySQL Connect und der Oracle PartnerNetwork Exchange. Beim Oracle PartnerNetwork Exchange wurden, ganz auf die Fragen und Bedürfnisse der Oracle Partner zugeschnitten, Themen wie Cloud für Partner, Applications, Engineered Systems und Hardware, Big Data, oder Industry Solutions behandelt, und es gab, ganz wichtig, viel Gelegenheit zu Austausch und Vernetzung. Konkret befassten sich dort beispielsweise Sessions mit Cloudanwendungen im Gesundheitsbereich, mit der Erstellung überzeugender Business Cases für Kundengespräche oder mit Mobile und Social Networking. Die aus Deutschland angereisten über 40 Partner trafen sich beim OPN Exchange zu einem anregenden gemeinsamen Abend mit den anderen Teilnehmern. Dass die Oracle OpenWorld auch noch zum sportlichen Highlight werden würde, kam denkbar unerwartet: Zeitgleich mit der Konferenz wurde nämlich in der Bucht von San Francisco die entscheidende 19. Etappe des Americas Cup ausgetragen. Im traditionsreichen Segelwettbewerb lag Team Oracle USA zunächst mit 1:8 zurück, schaffte es aber dennoch, den Sieg vor dem lange Zeit überlegenen Team Neuseeland zu holen und somit den Titel zu verteidigen. Selbstverständlich fand die Oracle OpenWorld auch ein großes Medienecho. Wir haben eine Auswahl für Sie zusammengestellt: - ChannelPartner- Computerwoche - Heise - Silicon über Big Data - Silicon über 12c

    Read the article

  • Massive Silverlight Giveaway! DevExpress, Syncfusion, Crypto Obfuscator and SL Spy!

    - by mbcrump
    Oh my, have we grown! Maybe I should change the name to Multiple Silverlight Giveaways. So far, my Silverlight giveaways have been such a success that I'm going to be able to give away more than one Silverlight product every month. Last month, we gave away 3 great products: 1) ComponentOne Silverlight Controls, 2) ComponentOne XAP Optimizer (with obfuscation) and 3) Silverlight Spy. This month, we will give away 4 great Silverlight products and have 4 different winners. This way the Silverlight community can grow with more than just one person winning all the prizes. This month we will be giving away:
    - DevExpress Silverlight Controls - over 50 Silverlight controls
    - Syncfusion User Interface Edition - create stunning line-of-business Silverlight applications with a wide range of components, including a high-performance grid, docking manager, chart, gauge, scheduler and much more
    - Crypto Obfuscator - works for all .NET, including Silverlight/Windows Phone 7
    - Silverlight Spy - provides a license EVERY month for this giveaway
    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    Win a FREE developer's license of one of the products listed above! 4 winners will be announced on April 1st, 2011! To be entered into the contest, do the following things:
    - Subscribe to my feed - use Google Reader, email or whatever is best for you.
    - Leave a comment below with a valid email account (I WILL NOT share this info with anyone).
    - Retweet the following: I just entered to win free #Silverlight controls from @mbcrump . Register here: http://mcrump.me/fTSmB8 ! Don't change the URL, because this allows me to track the users that tweet this page.
    - Don't forget to visit each of the vendors' sites, because they made this possible.
    MichaelCrump.Net provides Silverlight giveaways every month. You can always see the latest giveaway by bookmarking http://giveaways.michaelcrump.net .
    ----------------------------------------------------------------------------------------------------------------------------------------------------------
    DevExpress Silverlight Controls
    Let's take a quick look at some of the software provided in this giveaway. Before we get started with the Silverlight Controls, here are a couple of links to bookmark for the DevExpress Silverlight Controls:
    - The live demos of the Silverlight Controls are located here.
    - Great video tutorials of the Silverlight Controls are here.
    One thing that I liked about DevExpress is how easy it was to find demos of each control. After you install the controls, the following program group appears, complete with demos that include full source. So, the first question that you may ask is, "What is included?" Here is the official list below. I wanted to show several of the controls that I think developers will use the most.
    - The Book - very rich animation when switching pages. Very easy to add your own images and custom text.
    - The Menu - this is another control that just looked great. You can easily add images to the menu items with a few lines of XAML.
    - The Window / Dialog Box - you can use this control to make a very beautiful "wizard" to help your users navigate between pages. This is useful in setup or installation.
    - Calculator - this would be useful for any type of banking app. Also a first that I've seen from a 3rd-party control company.
    - DatePicker - this control feels a lot smoother than the one provided by Microsoft. It also provides the ability to "Clear" the selection.
    Overall, the DevExpress Silverlight Controls feature a lot of quality controls that you should check out. You can go ahead and download a trial version right now by clicking here. If you win the contest, you can simply enter your registration key and continue using the product without reinstalling.
    Syncfusion User Interface Edition
    Before we get started with the Syncfusion User Interface Edition, here are a couple of links to bookmark:
    - The live demos can be found here.
    - You can download a demo now at http://www.syncfusion.com/downloads/evalstart.
    After you install Syncfusion, you can view the dashboard to run locally installed samples. You may also download the documentation to your local machine if needed. Since the name of the package is "User Interface Edition", I decided to share several samples that struck me as "awesome".
    - Dashboard gauges - I was very impressed with the various gauges they have included. The digital clock also looks very impressive.
    - Diagram - diagrams are also very easy to build. In the sample project below you can drag and drop the shapes onto the content pane. More complex lines, like Bezier lines, are also easy to create using Syncfusion.
    - Scheduling - another strong component, with built-in support for themes.
    - Tools - if all of that wasn't enough, it also comes with a nice pack of essential tools.
    Syncfusion has a nice variety of Silverlight controls that you should check out. You can go ahead and download a trial version right now by clicking here.
    Crypto Obfuscator
    The following feature set is what is important to me in an obfuscator, since I am a Silverlight/WP7 developer - and thankfully this is what you get in Crypto Obfuscator. You can download a trial version right now if you want to go ahead and play with it. Let's spend a few moments taking a look at the application. After you have installed Crypto Obfuscator, you will see the following screen: After you click on Assemblies, you have the option to add your .XAP file: I went ahead and loaded my .xap file from a Silverlight application. At this point, you can simply save your project, hit "Obfuscate", and you're done. You don't have to mess with any of the other settings if you don't want to. Of course, you can change the settings and add obfuscation rules, watermarks and signing if you wish. After obfuscation, it looks like this in .NET Reflector: I was trying to browse through methods and it actually crashed Reflector. This confirms the level of protection the obfuscator is providing. If this were a commercial application that my team built, I would have a huge smile on my face right now. Crypto Obfuscator is a great product and I hope you will spend the time learning more about it.
    Silverlight Spy
    Silverlight Spy is a runtime inspector tool that will tell you pretty much everything that is going on with the application. Basically, you give it a URL that contains a Silverlight application and you can explore the element tree, events, XAML and so much more. This has already been reviewed on MichaelCrump.net.
    _________________________________________________________________________________________
    Thanks for reading, and don't forget to leave a comment below in order to win one of the four prizes available! Subscribe to my feed

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now there are fears of the worst - a meltdown of the nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying - remember the Chernobyl incident in Ukraine (then part of the Soviet Union)? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of a nuclear meltdown are beyond the imagination of a common man: thousands of people lose their lives and many lakhs (hundreds of thousands) more suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms, a nuclear meltdown is an accident caused by severe overheating of a nuclear reactor that results in the release of nuclear radiation into the environment.
    How Does a Nuclear Meltdown Happen?
    According to Wikipedia: "A meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss of pressure control accident, a loss of coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely." A rough rule of thumb for how quickly decay heat falls off is sketched below.
    Video - What Causes a Nuclear Meltdown
    Al Jazeera has a good analysis of the feared nuclear meltdown at Japan's nuclear plants, along with an animation of what causes a nuclear meltdown.
    cc image credit: flickr/jtjdt
    This article titled "What Is Nuclear Meltdown?" was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.
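    To make "decay heat decreases with time" concrete, here is a minimal back-of-the-envelope sketch using the Way-Wigner approximation. This formula is my own addition, not from the article above, and the leading coefficient varies between roughly 0.06 and 0.07 depending on the source:

        \frac{P(t)}{P_0} \approx 0.066\left[\, t^{-0.2} - (t_0 + t)^{-0.2} \,\right]

    where P_0 is the reactor's thermal power before shutdown, t_0 is how long the reactor operated (in seconds), and t is the time elapsed since shutdown (in seconds). For a reactor that ran for about a year (t_0 ≈ 3.15 x 10^7 s), one day after shutdown (t = 86,400 s) this gives P/P_0 ≈ 0.066 x (0.103 - 0.032) ≈ 0.005 - about half a percent of full power, which for a 3,000 MW (thermal) core is still roughly 14 MW of heat that must continuously be removed. This is why cooling must keep running long after a reactor is shut down.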

    Read the article

  • Collision detection - Smooth wall sliding, no bounce effect

    - by Joey
    I'm working on a basic collision detection system that provides point - OBB collision detection. I have around 200 cubes in my environment and I check (for now) each of them in turn and see if it collides. If it does I return the colliding face's normal, save the old player position and do some trigonometry to return a new player position for my wall sliding. edit I'll define my meaning of wall sliding: If a player walks in a vertical slope and has a slight horizontal rotation to the left or the right and keeps walking forward in the wall the player should slide a little to the right/left while continually walking towards the wall till he left the wall. Thus, sliding along the wall. Everything works fine and with multiple objects as well but I still have one problem I can't seem to figure out: smooth wall sliding. In my current implementation sliding along the walls make my player bounce like a mad man (especially noticable with gravity on and moving forward). I have a velocity/direction vector, a normal vector from the collided plane and an old and new player position. First I negate the normal vector and get my new velocity vector by substracting the inverted normal from my direction vector (which is the vector to slide along the wall) and I add this vector to my new Player position and recalculate the direction vector (in case I have multiple collisions). I know I am missing some step but I can't seem to figure it out. Here is my code for the collision detection (run every frame): Vector direction; Vector newPos(camera.GetOriginX(), camera.GetOriginY(), camera.GetOriginZ()); direction = newPos - oldPos; // Direction vector // Check for collision with new position for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { // Get inverse normal (direction STRAIGHT INTO wall) Vector invNormal = normal.Negative(); Vector wallDir = direction - invNormal; // We know INTO wall, and DIRECTION to wall. Substract these and you got slide WALL direction newPos = oldPos + wallDir; direction = newPos - oldPos; } } Any help would be greatly appreciated! FIX I eventually got things up and running how they should thanks to Krazy, I'll post the updated code listing in case someone else comes upon this problem! for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { Vector invNormal = normal.Negative(); invNormal = invNormal * (direction * normal).Length(); // Change normal to direction's length and normal's axis Vector wallDir = direction - invNormal; newPos = oldPos + wallDir; direction = newPos - oldPos; } }

    Read the article
