Search Results

Search found 15499 results on 620 pages for 'non obvious'.

  • Exadata support for ACFS (and thus, 10gR2) now available!

    - by Robert Freeman
    Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported - until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1, and Exadata support for ACFS is also mentioned in MOS note 888828.1, the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the database.

    Don't Over-React and Just Throw Everything on ACFS!

    First, let's be clear that ACFS is not an alternative to running your Exadata databases on ASM. If you are running any production or performance-sensitive non-production Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks associated with the storage cells. The use case for ACFS is generally limited to the following: running any Oracle 10gR2 databases on Exadata, and running Oracle 11gR2 development or test databases that require rapid cloning and do not require the performance benefits of the Exadata storage cells. If you are running Oracle Database 12c and you need snapshot/clone capabilities, then you should be using Oracle Multitenant and the features present in that option (remember, though, that Multitenant is a licensed option).

    The Fine Print

    There are some requirements you will need to meet if you are going to run ACFS on Exadata: you have to use Oracle Linux; you must use GI 12.1.0.2 or later; and if you wish to use HCC you must apply the fix for bug 19136936 to your system. This bug and its associated patch do not appear on MOS (as of the time I wrote this), so you will need to open an SR and have support provide the patch for you.

    The Best Use Case for ACFS

    Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing Exadata as a consolidation platform or during a hardware refresh. Now that ACFS is supported, Exadata is even more flexible and gives customers more options when migrating to Exadata and Engineered Systems. While not all of the features of Exadata may be available to a 10.2.0.4 database, the improved processing capabilities of Exadata alone (its fast-as-heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features) justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!

    Read the article

  • How do I get the Apple Wireless Keyboard Working in 10.10?

    - by Jamie
    So I've gone and bought a Magic Mouse and an Apple Wireless (non-numeric) Keyboard. The Magic Mouse worked out-of-the-box almost perfectly, except for the forward/back gesture, which still isn't functioning; the keyboard didn't. It has constant trouble with the Bluetooth connection, and only the 7, 8 and 9 keys and the volume media keys correspond correctly with the output. Pressing every single key on the keyboard produces this output: 789/=456*123-0.+

    When I use Blueman the keyboard can be set up and shows up in "Devices", but I get a warning when I click "Setup": "Device added successfully, but failed to connect" (although removing the keyboard and setting it up as a new device doesn't trigger this error). Using gnome-bluetooth I have encountered no error messages, but it connects properly less often than Blueman and I can still only type the aforementioned output. What am I not doing? Where is this going wrong?

    EDIT: I have read http://ubuntuforums.org/showthread.php?t=224673 inside out several times to no avail. It seems these commands don't work for me with the Apple peripherals:

        sudo hidd --search
        hcitool scan

    Fortunately I have the luxury of a 1TB hard drive, near limitless patience and no job. I installed a fresh Ubuntu 10.10 64-bit (albeit a smaller install than mine) and, after updating and restarting for the first time, I set up my devices in exactly the same way as I had learnt on my original install. I succeeded once again with the mouse and, to my joy, with the keyboard also. Though I could not seem to find Alt+F2 and had to reconfigure that and several other keyboard shortcuts, the keyboard is working, and in spectacular fashion. Still, this leaves me with the issue of my original install. I returned to it with some new-found knowledge but failed again. Perhaps I have a missing dependency? I did uninstall Bluetooth after the initial setup and reinstalled it recently for the purpose of these peripherals. Maybe it's because I'm running 64-bit? This is still not solved, but easily avoided by not changing too much from the original install. Just hide stuff or turn it off, don't uninstall too much.
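    For reference, the pairing sequence I have been attempting (following the forum thread above) looks roughly like this on 10.10 with BlueZ 4 and hidd from bluez-compat; the MAC address below is just a placeholder for whatever the scan reports:

        # discover nearby devices and note the keyboard's address
        hcitool scan
        # put the keyboard into pairing mode, then connect it as an HID device
        sudo hidd --connect 00:11:22:33:44:55
        # check that the connection is actually up
        sudo hidd --show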

    Read the article

  • My thoughts on the future of the web with respect to flash, plugins, etc…

    - by joelvarty
    More than 10 years ago I was coding Java applets. They were great at the time because I could reasonably expect them to run the same way in Netscape and Internet Explorer, and I could reliably do asynchronous networking back to the server. But then Microsoft pulled their native Java runtime from Windows and Internet Explorer, and it got a lot harder to get applets running in people's browsers. So I started writing ActiveX controls for IE and Java applets for Netscape. Then I switched to Flash, not for too long, but it was enough for me to see that it was a capable and curious implementation of animation, multimedia and script. I even wrote a few Silverlight controls, but then I stopped.

    I stepped back from all of the "richness" and "interactivity" and I thought about things like accessibility and SEO. I wondered how my apps and sites might appear to the greater world. I wondered how the developers I am working with, or who might be inheriting my code down the road, might interact with it. And I thought to myself: what the hell was I thinking?

    Those embedded controls are not what the web is about, and they run contrary to nearly all of the things that make the web exciting and foster innovation in and around it. Those plugins or controls, or whatever you want to refer to them as, are only stop-gaps that fill a hole in the basic HTML/script/CSS specifications, and that's all they should ever be used for. Full stop. Period. For instance, I still make use of a nifty little Flash control called SWFUpload because it lets me check file size before an upload starts. I can do the same thing from a Silverlight control. But rest assured, if I could do this from native JavaScript, I would in a second. In fact, the only reason I chose SWFUpload over a ton of other alternatives is that it has a great JavaScript API, so I can do (nearly) all of the UI in regular HTML. And I ALWAYS provide a non-Flash alternative for uploading, and for the rest of any website where the designer has insisted on some piece of creativity that requires Flash (usually because the designer is also the Flash developer, but that's an aside...).

    The web is about openness, and about exposing that openness in such a way that it can be taken advantage of as a small part of a greater whole. Sure we need security and authentication and SSL and all that stuff, but for me it's something more profound. For me, the majority of what the web is, is about exposing something that delivers meaning. What meaning can we derive from an <object> tag?

    more later - joel

    Read the article

  • Requesting Delegation (ActAs) Tokens using WSTrustChannel (as opposed to Configuration Madness)

    - by Your DisplayName here!
    Delegation using the ActAs approach has some interesting security features:

    - A security token service can make authorization and validation checks before issuing the ActAs token.
    - Combined with proof keys you get non-repudiation features.
    - The ultimate receiver sees the original caller as the direct caller and can optionally traverse the delegation chain.
    - Encryption and audience restriction can be tied down.

    Most samples out there (including the SDK sample) use the CreateChannelActingAs extension method from WIF to request ActAs tokens. This method builds on top of the WCF binding configuration, which may not always be suitable for your situation. You can also use the WSTrustChannel to request ActAs tokens. This allows direct and programmatic control over bindings and configuration and is my preferred approach.

    The method below requests an ActAs token based on a bootstrap token. The returned token can then be used directly with the CreateChannelWithIssuedToken extension method.

        private SecurityToken GetActAsToken(SecurityToken bootstrapToken)
        {
            var factory = new WSTrustChannelFactory(
                new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
                new EndpointAddress(_stsAddress));

            factory.TrustVersion = TrustVersion.WSTrust13;
            factory.Credentials.UserName.UserName = "middletier";
            factory.Credentials.UserName.Password = "abc!123";

            var rst = new RequestSecurityToken
            {
                AppliesTo = new EndpointAddress(_serviceAddress),
                RequestType = RequestTypes.Issue,
                KeyType = KeyTypes.Symmetric,
                ActAs = new SecurityTokenElement(bootstrapToken)
            };

            var channel = factory.CreateChannel();
            var delegationToken = channel.Issue(rst);

            return delegationToken;
        }

    HTH
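    As a quick illustration of that last step, here is a minimal consumption sketch. IMyService, binding and the proxy call are placeholders for your own service contract and configuration; ConfigureChannelFactory and CreateChannelWithIssuedToken are the WIF extension methods for issued-token channels:

        // Sketch: call the back-end service with the ActAs token (contract and binding are placeholders).
        var factory = new ChannelFactory<IMyService>(binding, new EndpointAddress(_serviceAddress));
        factory.ConfigureChannelFactory();   // enable WIF's issued-token support on the factory

        var actAsToken = GetActAsToken(bootstrapToken);
        var proxy = factory.CreateChannelWithIssuedToken(actAsToken);
        proxy.DoWork();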

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But that book, as well as most questions on legacy code I have found so far, is concerned with inheriting code as-is. In this case I actually have access to the original developer and we do have some time for an orderly hand-over.

    Some background on the piece of code I will be inheriting:

    - It's functioning: there are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future.
    - Undocumented: there is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years.
    - Only higher-level integration tests: there are only integration tests checking proper interaction with other components via the API (again, black box).
    - Very low-level, optimized for speed: because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
    - Concurrent and lock-free: while I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity.
    - Large codebase: this particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
    - Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic.

    I was wondering how the time until his departure would best be spent. Here are a couple of ideas:

    - Get everything to build on my machine: even though everything should be checked into source control, who hasn't forgotten to check in a file once in a while? This should probably be the first order of business.
    - More tests: while I would like more class-level unit tests so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
    - What to document: I think for starters it would be best to focus documentation on those areas of the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that were put in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
    - How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
    - Who should document: I was wondering what would be better, to have him write the documentation or to have him explain it to me so I can write it. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly.
    - Refactoring using pair programming: this might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • Keep Track of Your Tasks with toDoo

    - by Asian Angel
    A task list can be convenient, but most of the time you cannot include details for those tasks, or you have to have an online account to do so. If you want to keep your task list with you on your computer or laptop and be able to add plenty of detail, then you might want to look at toDoo. Note: requires Adobe AIR (download link at the bottom of the article).

    toDoo in Action

    Once you have installed toDoo, everything is rather straightforward. The first time you start toDoo there will be temporary "fill-in" text in the Subject and Details areas. Simply highlight the temporary text and add your own information. Notice that, if desired, you can easily set a custom date and time for your tasks right below the Details area. Note: toDoo does not minimize to the system tray.

    Once you have everything set, all you need to do is click on "add task". Here was our first new task being viewed in the toDoo Description tab. Time to add a second task... here you can see the drop-down calendar. You can scroll through and select a different month very easily; just click on the desired day and it will be set automatically. Adding our second task... If you need to edit any of the details for a particular task, you can do so in the Edit toDoo tab. This nice little app is convenient and easy to use.

    Conclusion

    toDoo is a simple, straightforward app that lets you keep track of your task list and relevant details without an online account (especially helpful if you are without a wireless connection at a given moment). If you are looking for more of a list approach that runs on your desktop, then check out our article on Doomi here.

    Links: Download toDoo at Softpedia | Download toDoo at Adobe Marketplace | Download Adobe AIR

    Read the article

  • Exceptional DBA 2011 Jeff Moden on why you should enter in 2012

    - by Red and the Community
    My "reign" as the Red Gate Exceptional DBA is almost over and I was asked to say a few words about this wonderful award. Having been one of those folks that shied away from entering the contest during the first 3 years of the award, I thought I'd spend the time encouraging DBAs of all types to enter.

    Winning this award has some obvious benefits. You win a trip to PASS including money towards your flight, paid hotel stay, and, of course, paid admission. You win a wonderful bundle of software from Red Gate to make your job as a DBA a whole lot easier. You also win some pretty incredible notoriety for your resume. After all, it's not everyone who wins a worldwide contest. To date, there are only 4 of us in the world who have won this award. You could be number 5!

    For me, all of that pales in comparison to what I found out during the entry process. I'm very confident in my skills, but I'm also humble. It was suggested to me that I enter the contest when it first started. I just couldn't bring myself to nominate myself. When the 2011 nomination period opened up, several people again suggested that I enter, so I swallowed hard and asked several co-workers to have a look at the online nomination form and, if they thought me worthy, to write a nomination for me. I won't bore you with the details, but what they wrote about me was one of the most incredible rewards that I could ever have hoped to receive. I had no idea of the impact that I'd made on my co-workers.

    "Even if I hadn't made it to the top 5 for the award, I had already won something very near and dear that no one can ever top."

    There's only one named winner and 4 "runners up" in this competition every year, but don't let that discourage you. Enter this competition. Even if you work in the proverbial "Mom'n'Pop" shop, get your boss and the people you work with directly to nominate you. Even if you don't make it to the top 5, you might just find out that you're more of a winner than you think. If you're too proud to ask them, then take the time to nominate yourself instead of shying away like I did for the first 3 years. You work hard as a DBA and, as David Poole once said, if you're the first person that people ask for help rather than one of the last, then you're probably an Exceptional DBA. It's time to stand up and be counted!

    Win or lose, the entry process can be a huge reward in itself. It was for me. Thank you, Red Gate, for giving me such a wonderful opportunity. Thanks for listening folks and for all that you do as DBAs. As 'Red Green' says, "We're all in this together and I'm pullin' for ya".

    –Jeff Moden, Red Gate Exceptional DBA 2011

    Read the article

  • SQL SERVER – Online Index Rebuilding Index Improvement in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where you see something working and you feel it should not be working? Well, I had a similar moment a few days ago. I know that SQL Server 2008 supports online indexing. However, I also know that I cannot rebuild an index ONLINE if I have used VARCHAR(MAX), NVARCHAR(MAX) or a few other data types. While I held this belief very strongly, I came across a situation where I had to go and do a little bit of reading in Books Online. Here is a similar example. First of all, run the following code in SQL Server 2008 or SQL Server 2008 R2:

        USE TempDB
        GO
        CREATE TABLE TestTable (ID INT, FirstCol NVARCHAR(10), SecondCol NVARCHAR(MAX))
        GO
        CREATE CLUSTERED INDEX [IX_TestTable] ON TestTable (ID)
        GO
        CREATE NONCLUSTERED INDEX [IX_TestTable_Cols] ON TestTable (FirstCol) INCLUDE (SecondCol)
        GO
        USE [tempdb]
        GO
        ALTER INDEX [IX_TestTable_Cols] ON [dbo].[TestTable] REBUILD WITH (ONLINE = ON)
        GO
        DROP TABLE TestTable
        GO

    Now run the same code in SQL Server 2012 and observe the difference between the two executions.

    In SQL Server 2008/R2 it will throw the following error:

        Msg 2725, Level 16, State 2, Line 1
        An online operation cannot be performed for index 'IX_TestTable_Cols' because the index contains column 'SecondCol' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.

    In SQL Server 2012 it will run successfully and will not throw any error:

        Command(s) completed successfully.

    I always thought it would throw an error if VARCHAR(MAX) or NVARCHAR(MAX) was used in the table schema definition. When I saw this result it was clear to me that this is for sure not a bug but an enhancement in SQL Server 2012. As a matter of fact, I always wanted this capability to be added to the SQL Server engine, as it enables ONLINE index rebuilding for mission-critical tables which need to be always online. I quickly searched online and landed on Jacob Sebastian's blog, where he has blogged about it as well. Well, is there any other new feature in SQL Server 2012 which gave you a good surprise?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently I was working on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message "list view threshold is exceeded", even though the view returned no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view does not return any results, calculating that (empty) result still needs to access more than 5000 items in the database. To fix the issue, you need to create an index on the column that you use in the filter for the view (a code sketch for this follows at the end of this post).

    Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold in the web application from 5000 to 2000, so that the problem can be shown without loading more than 5000 items into the list.
    3. Go to the page that displays the Approved view of the loan application document set. It displays the error message shown below, although the view does not return any data.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as this field is used as the filter for the view.
    7. After the index is created, the Approved view can be accessed; as you can see, it does not return any data.

    Notes: List view threshold: specifies the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx
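    For completeness, the same column index can be created in code. This is only a rough sketch using the SharePoint server object model; the site URL and list name are placeholders matching this example:

        // Rough sketch: index the LoanStatus column so filtered views stay under the threshold.
        // Requires a reference to Microsoft.SharePoint.dll (namespace Microsoft.SharePoint).
        using (SPSite site = new SPSite("http://server/sites/loans"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Loan Applications"];
            SPField loanStatus = list.Fields["LoanStatus"];

            // Add a column index for the field used in the view filter.
            list.FieldIndexes.Add(loanStatus);
            list.Update();
        }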

    Read the article

  • Subterranean IL: Filter exception handlers

    - by Simon Cooper
    Filter handlers are the second type of exception handler that isn't accessible from C#. Unlike the other handler types, which have defined conditions for when the handlers execute, filter lets you use custom logic to determine whether the handler should be run. However, similar to a catch block, the filter block does not get run if control flow exits the block without throwing an exception.

    Introducing filter blocks

    An example of a filter block in IL is the following:

        .try {
            // try block
        } filter {
            // filter block
            endfilter
        }{
            // filter handler
        }

    or, in v1 syntax:

        TryStart:
            // try block
        TryEnd:
        FilterStart:
            // filter block
        HandlerStart:
            // filter handler
        HandlerEnd:
        .try TryStart to TryEnd filter FilterStart handler HandlerStart to HandlerEnd

    In the v1 syntax there is no end label specified for the filter block. This is because the filter block must come immediately before the filter handler; the end of the filter block is the start of the filter handler. The filter block indicates to the CLR whether the filter handler should be executed using a boolean value on the stack when the endfilter instruction is run: true/non-zero if it is to be executed, false/zero if it isn't. At the start of the filter block, and the corresponding filter handler, a reference to the thrown exception is pushed onto the stack as a raw object (you have to manually cast to System.Exception). The IL allowed inside a filter block is tightly controlled; you aren't allowed branches outside the block, rethrow instructions, or other exception handling clauses. You can, however, use call and callvirt instructions to call other methods.

    Filter block logic

    To demonstrate filter block logic, in this example I'm filtering on whether there's a particular key in the Data dictionary of the thrown exception:

        .try {
            // try block
        } filter {
            // Filter starts with the exception object on the stack.
            // C# code: ((Exception)e).Data.Contains("MyExceptionDataKey")
            // Only execute the handler if Contains returns true.
            castclass [mscorlib]System.Exception
            callvirt instance class [mscorlib]System.Collections.IDictionary [mscorlib]System.Exception::get_Data()
            ldstr "MyExceptionDataKey"
            callvirt instance bool [mscorlib]System.Collections.IDictionary::Contains(object)
            endfilter
        }{
            // Filter handler: also starts off with the exception object on the stack.
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
        }

    Conclusion

    Filter exception handlers are another exception handler type that isn't accessible from C#; however, just like fault handlers, the behaviour can be replicated using a normal catch block:

        try {
            // try block
        } catch (Exception e) {
            if (!FilterLogic(e)) throw;
            // handler logic
        }

    So it's not that great a loss, but it's still annoying that this functionality isn't directly accessible. Well, every feature starts off with minus 100 points, so it's understandable why something like this didn't make it into the C# compiler ahead of a different feature.

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) routing would just not work. Every time I hit a routed URL, IIS would just throw up a 404 error.

    This is an IIS error, not an ASP.NET error, so this doesn't actually come from ASP.NET's routing engine but from IIS's handling of extensionless URLs. Note that it's clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to parse the extensionless URL, failing, and never handing it off to ASP.NET.

    As I mentioned, on my local machine this all worked fine, and to make sure local and live setups matched I re-copied my web.config, double-checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However, no workey on the server with IIS 7.0!!!

    Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute on the modules key in web.config and set it to true:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
          </modules>
        </system.webServer>

    And lo and behold, routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET routing was working just fine without the key set. However, on IIS 7.0 on my live server the same missing setting broke routing: on IIS 7.0 this key must be present or routing will not work. Oddly, on IIS 7.5 it appears that you can't even turn off the behavior; setting runAllManagedModulesForAllRequests="false" had no effect at all, and routing continued to work just fine even with the flag set to false, which is NOT what I would have expected.

    Kind of disappointing too that Windows Server 2008 (R1) can't be upgraded to IIS 7.5. It sure seems like that should have been possible, since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like, with the release of IIS Express, Microsoft has taken some steps to untie some of those tight OS links from IIS. Let's hope that's the case for the future; it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out.

    Moral of the story: never assume that your dev setup will work as-is on the live setup. It took me forever to figure this out, because I assumed that since my web.config on the local machine was fine and working, and I had copied all relevant web.config data to the server, it couldn't be the configuration settings. I was looking everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense options on the modules key. Never assume anything. The other moral is: try to keep your dev machine and server OSs in sync whenever possible. Maybe it's time to upgrade to Windows Server 2008 R2 after all.

    More info on extensionless URLs in IIS: want to find out more about exactly how extensionless URLs work on IIS 7? Check out "How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests", which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article; a great linked reference for this topic!

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows

    Read the article

  • Dell XPS 15 (L502x) sound problem

    - by lauramolenaar
    I have a problem with my sound on my Dell XPS 15. First, when I had Windows 7, my sound was pretty good (I have a JBL 2.1 speaker system with Waves MaxxAudio), but since I installed Ubuntu 12.04 it sounds very cheap, as if I had put my laptop in a tin can. I've already tried installing alsa-hda-dkms from the alsa-daily PPA (http://ppa.launchpad.net/ubuntu-audio-dev/alsa-daily).

    This is my audio controller:

        laura@laura-XPS-L502X:~$ lspci -v | grep -A7 -i audio
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
                Subsystem: Dell Device 04b6
                Flags: bus master, fast devsel, latency 0, IRQ 51
                Memory at f1c00000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: snd_hda_intel
                Kernel modules: snd-hda-intel

    I hope you can help me, and you can always ask me for more information.

    Result of aplay -l:

        **** List of PLAYBACK Hardware Devices ****
        card 0: PCH [HDA Intel PCH], device 0: ALC665 Analog [ALC665 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 1: ALC665 Digital [ALC665 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    Result of aplay -L:

        default
            Playback/recording through the PulseAudio sound server
        sysdefault:CARD=PCH
            HDA Intel PCH, ALC665 Analog
            Default Audio Device
        front:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Front speakers
        surround40:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Digital
            IEC958 (S/PDIF) Digital Audio Output
        hdmi:CARD=PCH,DEV=0
            HDA Intel PCH, HDMI 0
            HDMI Audio Output
        dmix:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample mixing device
        dmix:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample mixing device
        dmix:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample mixing device
        dsnoop:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample snooping device
        hw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct hardware device without any conversions
        plughw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Hardware device with all software conversions

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as there was a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .NET 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server.

    I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be, beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev web host. My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VMs on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might be going on in IIS that doesn't happen in Visual Studio?

    In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle, or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense.

    The fix is sort of a hack, where I don't set up the innards of the HttpModule until a call to its BeginRequest is made (a rough sketch of the idea follows at the end of this post). I say it's a hack because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod attribute was either failing silently and running again later, or it was getting to that code after Application_Start was called. In any case, a weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious, because it renders the mobile/alternate view functionality very much broken.
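    For illustration, here is a stripped-down sketch of the lazy-init workaround described above. ErrorLog and ServiceLocator are placeholders standing in for the real POP Forums dependencies and container access, not the actual code:

        // Sketch: defer dependency resolution until the first request, after
        // Application_Start has registered the repositories with the container.
        // ErrorLog and ServiceLocator are hypothetical placeholders.
        using System.Web;

        public class ErrorLoggingModule : IHttpModule
        {
            private readonly object _syncRoot = new object();
            private bool _initialized;
            private ErrorLog _errorLog;

            public void Init(HttpApplication context)
            {
                // Do not resolve anything here; this runs before Application_Start.
                context.BeginRequest += (sender, e) => EnsureInitialized();
            }

            private void EnsureInitialized()
            {
                if (_initialized) return;
                lock (_syncRoot)
                {
                    if (_initialized) return;
                    // By the first request, Application_Start has run and the repos are mapped.
                    _errorLog = ServiceLocator.Resolve<ErrorLog>();
                    _initialized = true;
                }
            }

            public void Dispose() { }
        }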

    Read the article

  • Microsoft ReportViewer SetParameters continuous refresh issue

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2013/10/16/microsoft-reportviewer-setparameters-continuous-refresh-issue.aspx

    I am a big fan of using ASP.NET MVC for building web applications. It allows us to create simple, robust and testable solutions. However, the .NET world is not perfect. There are tons of code written in ASP.NET Web Forms. You cannot simply ignore it, even if you want to. Sometimes ASP.NET Web Forms controls bring us non-obvious issues. A good example is the Microsoft ReportViewer control. I have an example for you.

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
        <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Report Viewer Continuous Refresh Issue Example</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <div>
                    <asp:ScriptManager runat="server"></asp:ScriptManager>
                    <rsweb:ReportViewer ID="_reportViewer" runat="server" Width="100%" Height="100%"></rsweb:ReportViewer>
                </div>
            </form>
        </body>
        </html>

    The back-end code is simple as well. I want to show a report with some parameters to a user.

        protected void Page_Load(object sender, EventArgs e)
        {
            _reportViewer.ProcessingMode = ProcessingMode.Remote;
            _reportViewer.ShowParameterPrompts = false;

            var serverReport = _reportViewer.ServerReport;
            serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
            serverReport.ReportPath = "/Reports/TestReport";

            var reportParameter1 = new ReportParameter("Parameter1");
            reportParameter1.Values.Add("Hello World!");

            var reportParameter2 = new ReportParameter("Parameter2");
            reportParameter2.Values.Add("10/16/2013");

            var reportParameter3 = new ReportParameter("Parameter3");
            reportParameter3.Values.Add("10");

            serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
        }

    I set ShowParameterPrompts to false because I do not want the user to refine the search. It looks good until you run the report: the report refreshes itself all the time. The problem is caused by the ServerReport.SetParameters call in Page_Load. That method causes the ReportViewer control to execute the report on the NEXT postback, which is why the page posts back continuously. The fix is very simple: do nothing if Page_Load is executed during a postback.

        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack)
            {
                return;
            }

            _reportViewer.ProcessingMode = ProcessingMode.Remote;
            _reportViewer.ShowParameterPrompts = false;

            var serverReport = _reportViewer.ServerReport;
            serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
            serverReport.ReportPath = "/Reports/TestReport";

            var reportParameter1 = new ReportParameter("Parameter1");
            reportParameter1.Values.Add("Hello World!");

            var reportParameter2 = new ReportParameter("Parameter2");
            reportParameter2.Values.Add("10/16/2013");

            var reportParameter3 = new ReportParameter("Parameter3");
            reportParameter3.Values.Add("10");

            serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
        }

    You can download the sample code from GitHub: https://github.com/ilich/Examples/tree/master/ReportViewerContinuousRefresh

    Read the article

  • Add the 2D Version of the New Unity Interface to Ubuntu 10.10 and 11.04

    - by Asian Angel
    Is your computer or virtualization software unable to display the new 3D version of the Unity interface in Ubuntu? Now you can access and enjoy the 2D version with just a little PPA magic added to your system!

    To add the new PPA, open the Ubuntu Software Center, go to the Edit menu, and select Software Sources. Access the Other Software tab in the Software Sources window and add the first of the PPAs shown below (outlined in red). The second PPA will be added to your system automatically.

    Once you have the new PPAs set up, go back to the Ubuntu Software Center and click on the PPA listing for Unity 2D on the left (highlighted in red in the image). Scroll down until you find the listing for "Unity interface for non-accelerated graphics cards – unity-2d" and click Install.

    Once that is done, go to System, Administration, and then select Login Screen in your Ubuntu menu. Unlock the screen and select Unity 2D as the default session from the drop-down list as shown here. Log out and then back in to start enjoying that Unity 2D goodness!

    Here is how things will look when you click on the Ubuntu menu icon. Select the category that you would like to start with (such as Web) and get ready to have fun. This definitely looks (and works) awesome! Enjoy your new Unity 2D interface!

    Unity 2D Packaging PPA [Launchpad]
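    If you prefer the terminal, the same setup can be done roughly as follows. The exact PPA name is an assumption from memory, so verify it against the Launchpad page linked above; the unity-2d package name comes from the Software Center listing:

        # add the Unity 2D packaging PPA (name assumed; check the Launchpad page first)
        sudo add-apt-repository ppa:unity-2d-team/unity-2d-daily
        sudo apt-get update
        # install the 2D interface
        sudo apt-get install unity-2d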

    Read the article

  • SQL SERVER – Monday Morning Puzzle – Query Returns Results Sometimes but Not Always

    - by pinaldave
    The amount of email I receive sometimes makes it impossible for me to answer every message. Nonetheless, I try to answer pretty much every email I receive. Quite often, though, I receive questions I have no answer to, either because the email is incomplete or because it is outside my domain expertise. Recently I received one email which had only one or two lines but which caught my attention. The question was a bit vague, but it made me think, and the answer was not straightforward, so I kept writing it down as it came to me. However, after writing the answer I still did not feel satisfied. Let me put this question in front of you and see if we can all come up with a more comprehensive answer.

    Question: I am a beginner with SQL Server. I have one query; sometimes it returns a result and sometimes it does not. Where should I start looking for a solution, and what kind of information should I send you so you can help me solve it? I have no clue, please guide me.

    Well, if you read the question, it is indeed incomplete and does not contain much information at all. I decided to help him, and here is the answer I started to compose.

    Answer: As there is not much information in the original question, I am not confident what will solve your problem. However, here are a few things you can look at to see if they solve it:

    - Check the parameters passed to the query. Do the parameter values change between executions?
    - Check the connection string. Is there some kind of logic around it?
    - Do you have a non-deterministic component in your query logic? (In other words, does your result depend on the current date/time or any other time-based function?)
    - Are you facing a timeout while running your query?
    - Is there any error in the error log?
    - What is the business logic in your query?
    - Do you have all the required permissions on all the objects used in the query? Are permissions changing, or is the query accessing a different object in various executions?
    - (Add your suggestions here)

    A couple of these checks can even be scripted (see the short sketch at the end of this post). Meanwhile, have you ever faced this situation? If yes, do share your experience in the comment area. I will send a copy of my book SQL Server Interview Questions and Answers for the most interesting comment. The winner will be announced by next Monday.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
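    A quick starting point for a couple of the checks above (only a sketch; adjust the securable class to whatever your query actually touches):

        -- Look through the SQL Server error log for anything logged around the failed executions.
        EXEC sp_readerrorlog;

        -- Check the effective permissions of the current login at the database level.
        SELECT * FROM fn_my_permissions(NULL, 'DATABASE');

        -- If the query logic depends on date/time, confirm what the server thinks "now" is.
        SELECT GETDATE() AS ServerTime;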

    Read the article

  • mysql completely removing

    - by Dmitry Teplyakov
    I broke my MySQL and now I want to completely reinstall it. I tried:

        $ sudo apt-get install --reinstall mysql-server
        $ sudo apt-get remove --purge mysql-client mysql-server

    But I always see a popup asking me to change the root password; I change it, and then get an error that it cannot be changed.

        $ sudo apt-get remove --purge mysql-client mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package mysql-client is not installed, so not removed
        Package mysql-server is not installed, so not removed
        The following packages were automatically installed and are no longer required:
          libmygpo-qt1 libqtscript4-network libqtscript4-gui libtag-extras1 libqtscript4-sql libqtscript4-xml
          amarok-utils amarok-common libqtscript4-uitools liblastfm0 libloudmouth1-0 libqtscript4-core
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.04.2) ...
        121114 19:04:03 [Note] Plugin 'FEDERATED' is disabled.
        121114 19:04:03 InnoDB: The InnoDB memory heap is disabled
        121114 19:04:03 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121114 19:04:03 InnoDB: Compressed tables use zlib 1.2.3.4
        121114 19:04:03 InnoDB: Initializing buffer pool, size = 128.0M
        121114 19:04:03 InnoDB: Completed initialization of buffer pool
        InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
        InnoDB: 0 pages (rounded down to MB) than specified in the .cnf file:
        InnoDB: initial 640 pages, max 0 (relevant if non-zero) pages!
        121114 19:04:03 InnoDB: Could not open or create data files.
        121114 19:04:03 InnoDB: If you tried to add new data files, and it failed here,
        121114 19:04:03 InnoDB: you should now edit innodb_data_file_path in my.cnf back
        121114 19:04:03 InnoDB: to what it was, and remove the new ibdata files InnoDB created
        121114 19:04:03 InnoDB: in this failed attempt. InnoDB only wrote those files full of
        121114 19:04:03 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
        121114 19:04:03 InnoDB: remove old data files which contain your precious data!
        121114 19:04:03 [ERROR] Plugin 'InnoDB' init function returned error.
        121114 19:04:03 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121114 19:04:03 [ERROR] Unknown/unsupported storage engine: InnoDB
        121114 19:04:03 [ERROR] Aborting
        121114 19:04:03 [Note] /usr/sbin/mysqld: Shutdown complete
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         mysql-server-5.5
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It is good for me that I do not have any important databases, but...
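    Since I have no important databases, is a complete purge along these lines the right direction? A rough sketch of what I understand "completely removing" to mean; note that it wipes /var/lib/mysql, i.e. every database on the machine:

        # stop the server if it is (half) running
        sudo service mysql stop
        # purge the packages and their configuration
        sudo apt-get purge mysql-server mysql-server-5.5 mysql-client mysql-common
        sudo apt-get autoremove
        # remove leftover data and config so the next install starts clean
        sudo rm -rf /var/lib/mysql /etc/mysql
        # then reinstall
        sudo apt-get install mysql-server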

    Read the article

  • Code is not the best way to draw

    - by Bertrand Le Roy
    It should be quite obvious: drawing requires constant visual feedback. Why is it, then, that we still draw with code in so many situations? Of course it's because the low-level APIs always come first, and design tools are built after and on top of those. Existing design tools also don't typically include complex UI elements such as buttons. When we launched our Touch Display module for Netduino Go!, we naturally built APIs that made it easy to draw on the screen from code, but very soon we felt the limitations and tedium of drawing in code. In particular, any modification requires a modification of the code, followed by compilation and deployment. When trying to set up buttons at pixel precision, the process is not optimal.

    On the other hand, code is irreplaceable as a way to automate repetitive tasks. While tools like Illustrator have ways to repeat graphical elements, they do so in a way that is a little alien and counter-intuitive to my developer mind. From these reflections, I knew that I wanted a design tool that would be structurally code-centric but that would still enable immediate feedback and mouse adjustments. While thinking about the best way to achieve this goal, I saw this fantastic video by Bret Victor:

    The key to the magic in all these demos is permanent execution of the code being edited. Whenever a parameter is modified, everything is re-executed immediately so that the impact of the modification is instantaneously visible. If you do this all the time, the code and the result of its execution fuse in the mind of the user into dual representations of a single object. All mental barriers disappear. It's like magic.

    The tool I built, Nutshell, is just another implementation of this principle. It manipulates a list of graphical operations on the screen. Each operation has a nice editor, and translates into a bit of code. Any modification to the parameters of the operation will modify the bit of generated code and trigger a re-execution of the whole program. This happens so fast that it feels like the drawing reacts instantaneously to all changes. The order of the operations is also the order in which the code gets executed. So if you want to bring objects to the front, move them down in the list, and up if you want to move them to the back.

    But where it gets really fun is when you start applying code constructs such as loops to the design tool. The elements that you put inside of a loop can use the loop counter in expressions, enabling crazy scenarios while retaining the real-time editing features. When you're done building, you can just deploy the code to the device and see it run in its native environment.

    This works thanks to two code generators. The first code generator builds JavaScript that is executed in the browser to build the canvas view in the web page hosting the tool. The second code generator builds the C# code that will run on the Netduino Go! microcontroller and drive the display module. The possibilities are fascinating, even if you don't care about driving small touch screens from microcontrollers: it is now possible, within a reasonable budget, to build specialized design tools for very vertical applications. Direct feedback is a powerful ally in many domains. Code generation driven by visual designers has become more approachable than ever thanks to extraordinary JavaScript libraries and to the powerful development platform that modern browsers provide.
I encourage you to tinker with Nutshell and let it open your eyes to new possibilities that you may not have considered before. It’s open source. And of course, my company, Nwazet, can help you develop your own custom browser-based direct feedback design tools. This is real visual programming…

    Read the article

  • 10.10 freezing when sftp downloading

    - by aGr
    My Ubuntu 10.10 is freezing while downloading from my other computer over SFTP. I thought it might be some Nautilus issue, so I tried it via the command line and got the same thing: after a few minutes the whole computer freezes. Mostly the Num Lock LED is blinking (I've heard somewhere that this means a kernel panic), but not in 100% of cases. I dunno if it helps, but here is a log from /var/log/messages from the time this happened. At least I hope so; it wasn't this big when it happened before. But this looks quite "errorish", right? (It isn't complete; see the bottom.)

        Jan 5 17:57:49 tomas-ntb kernel: imklog 4.2.0, log source = /proc/kmsg started.
        Jan 5 17:57:49 tomas-ntb rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="953" x-info="http://www.rsyslog.com"] (re)start
        Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's groupid changed to 103
        Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's userid changed to 101
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpuset
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpu
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Linux version 2.6.35-24-generic (buildd@vernadsky) (gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5) ) #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 (Ubuntu 2.6.35-24.42-generic 2.6.35.8)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-provided physical RAM map:
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009e800 (usable)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 000000000009e800 - 00000000000a0000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000100000 - 00000000dffa8000 (usable)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffa8000 - 00000000dffb0000 (ACPI NVS)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffb0000 - 00000000dffbff00 (ACPI data)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffbff00 - 00000000dfff0000 (ACPI NVS)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dfff0000 - 00000000e0000000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000ff700000 - 0000000100000000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000100000000 - 0000000120000000 (usable)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000120000000 - 0000000140000000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Notice: NX (Execute Disable) protection cannot be enabled: non-PAE kernel!
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] DMI 2.5 present.
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] AMI BIOS detected: BIOS may corrupt low RAM, working around it.
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] last_pfn = 0xdffa8 max_arch_pfn = 0x100000
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Scanning 0 areas for low memory corruption
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified physical RAM map:
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000000000 - 0000000000010000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000010000 - 000000000009e800 (usable)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 000000000009e800 - 00000000000a0000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 00000000000e0000 - 0000000000100000 (reserved)
        Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000100000 - 00000000dffa8000 (usable)

    ... too long - for the whole and better view click here

    Read the article

  • NEC Corporation uPD720200 USB 3.0 controller doesn't run at full speed

    - by Radek Zyskowski
    I have fresh install of Ubuntu 10.10. I have external HD on USB 3.0. Trying to connect this via PCI Express NEC controller. dmesg: [ 8966.820078] usb 6-3: new high speed USB device using xhci_hcd and address 0 [ 8966.839831] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.840580] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.841329] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.842079] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.843343] scsi8 : usb-storage 6-3:1.0 [ 8967.847144] scsi 8:0:0:0: Direct-Access SAMSUNG HD204UI 1AQ1 PQ: 0 ANSI: 5 [ 8967.847589] sd 8:0:0:0: Attached scsi generic sg2 type 0 [ 8967.847923] sd 8:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) [ 8967.848341] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.850959] sd 8:0:0:0: [sdb] Write Protect is off [ 8967.850963] sd 8:0:0:0: [sdb] Mode Sense: 23 00 00 00 [ 8967.850966] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.851818] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.852365] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.852370] sdb: sdb1 [ 8967.871315] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.871853] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.871856] sd 8:0:0:0: [sdb] Attached SCSI disk [ 8967.950728] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.951355] sd 8:0:0:0: [sdb] Sense Key : Recovered Error [current] [descriptor] [ 8967.951361] Descriptor sense data with sense descriptors (in hex): [ 8967.951363] 72 01 04 1d 00 00 00 0e 09 0c 00 00 00 00 00 00 [ 8967.951375] 00 00 00 00 00 50 [ 8967.951380] sd 8:0:0:0: [sdb] ASC=0x4 ASCQ=0x1d [ 8968.790076] xhci_hcd 0000:02:00.0: HC died; cleaning up [ 8968.790076] usb 6-3: USB disconnect, address 2 [ 8999.008554] scsi 8:0:0:0: [sdb] Unhandled error code [ 8999.008558] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK [ 8999.008562] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 39 00 00 3e 00 [ 8999.008573] end_request: I/O error, dev sdb, sector 1953535801 [ 8999.008578] Buffer I/O error on device sdb1, logical block 1953535738 [ 8999.008582] Buffer I/O error on device sdb1, logical block 1953535739 [ 8999.008585] Buffer I/O error on device sdb1, logical block 1953535740 [ 8999.008589] Buffer I/O error on device sdb1, logical block 1953535741 [ 8999.008592] Buffer I/O error on device sdb1, logical block 1953535742 [ 8999.008595] Buffer I/O error on device sdb1, logical block 1953535743 [ 8999.008600] Buffer I/O error on device sdb1, logical block 1953535744 [ 8999.008603] Buffer I/O error on device sdb1, logical block 1953535745 [ 8999.008606] Buffer I/O error on device sdb1, logical block 1953535746 [ 8999.008609] Buffer I/O error on device sdb1, logical block 1953535747 [ 8999.008642] scsi 8:0:0:0: rejecting I/O to offline device [ 8999.008747] scsi 8:0:0:0: [sdb] Unhandled error code [ 8999.008749] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 8999.008752] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 77 00 00 3e 00 [ 8999.008760] end_request: I/O error, dev sdb, sector 1953535863 sudo lspci -v 2:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30) Physical Slot: 32 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fe9fe000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] 
MSI-X: Enable- Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff Capabilities: [150] #18 Kernel driver in use: xhci_hcd Kernel modules: xhci-hcd If I plug any USB 2.0 device into this controller, it works fine, but USB 3.0 devices do not. Any ideas?

    Read the article

  • Autoscaling in a modern world…. Part 4

    - by Steve Loethen
    Now that I have the rules and services XML files in the cloud, it is time to sever the bounds of earth and live totally in the cloud.  I have to host the Autoscaling object in Azure as well, point it to the rules, tell it the management certs and get out of the way. A couple of questions.  Where to host?  The most obvious place to me was a worker role.  A simple, single purpose worker role, doing nothing but watching my app.  Here are the steps I used. 1) Created a project.  Separate project from my web site.  I wanted to be able to run the web in the cloud and the autoscaler locally for debugging purposes.  Seemed like the easiest way.  2) Add the Wasabi block to the project. 3) Configure the settings.  I used the same settings used for the console app.  It points to the same web role, uses the same rules file.  4) Make sure the certificate needed to manage the role is added to the cert store in the sky (“LocalMachine” and “My” are default locations). I ran the worker role in the local fabric.  It worked.  I then published to the cloud, and verified it worked again.  Here is what my code looked like.

    public override bool OnStart()
    {
        Trace.WriteLine("Set Default Connection Limit", "Information");
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit = 12;
        Trace.WriteLine("Set up configuration change code", "Information");
        // set up config
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) => configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));
        Trace.WriteLine("Get current diagnostic configuration", "Information");
        // Get current diagnostic configuration
        DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
        Trace.WriteLine("Set Diagnostic Buffer Size", "Information");
        // Set Diagnostic Buffer size
        dmc.Logs.BufferQuotaInMB = 4;
        Trace.WriteLine("Set log transfer period", "Information");
        // Set log transfer period
        dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        Trace.WriteLine("Set log verbosity", "Information");
        // Set log filter to verbose
        dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        Trace.WriteLine("Start the diagnostic monitor", "Information");
        // Start the diagnostic monitor
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);
        Trace.WriteLine("Get the current Autoscaler from the EntLib Container", "Information");
        // Get the current Autoscaler from the EntLib Container
        scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
        Trace.WriteLine("Start the autoscaler", "Information");
        // Start the autoscaler
        scaler.Start();
        Trace.WriteLine("call the base class OnStart", "Information");
        // call the base class OnStart
        return base.OnStart();
    }

    public override void OnStop()
    {
        Trace.WriteLine("Stop the Autoscaler", "Information");
        // Stop the Autoscaler
        scaler.Stop();
    }

    I did have to turn on some basic logging for wasabi, which I will cover in the next post.  This let me figure out that I hadn’t done the certificate step.
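    For completeness, here is a minimal sketch of the Run method such a worker role typically also overrides. It is not from the original post: it assumes the same WorkerRole class as the OnStart/OnStop above plus a using System.Threading directive. Strictly speaking it is optional, since the default RoleEntryPoint.Run blocks indefinitely, but a heartbeat loop is the common pattern and gives the autoscaler (which does its work in the background once Start has been called) a place to log that the role is still alive.

        public override void Run()
        {
            // The autoscaler started in OnStart() runs in the background,
            // so Run() only needs to keep the role instance alive.
            Trace.WriteLine("Autoscaling worker role entry point called", "Information");
            while (true)
            {
                Thread.Sleep(TimeSpan.FromMinutes(1));
                Trace.WriteLine("Autoscaling worker role heartbeat", "Information");
            }
        }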

    Read the article

  • Walking to the North Pole to raise money to protect children from cruelty.

    - by jessica.ebbelaar
    Hi, my name is Luca. I joined Oracle in 2005 and I am currently working as Dell EMEA Channel Manager for the UK, Ireland and Iberia, responsible for the Oracle Dell relationship in those 3 countries. On the 31st of March 2011 I will set out to complete the ultimate challenge. I will walk and ski across the frozen Arctic to the Top of The World: the GEOGRAPHIC North Pole, while dragging all my supplies over 60 nautical miles of moving sea ice, in temperatures as low as minus 30 degrees Celsius. I will spend 8 to 10 days preparing, working, living and travelling to the North Pole at 90 degrees north. In November I spent a full week of training for this trip (watch my video). This gave me the opportunity to meet the rest of the team, testing all the gear and carrying an 18-inch tyre around the countryside for 8 hours per day. I am honored to embark on this challenging journey to support the National Society for the Prevention of Cruelty to Children (NSPCC). The NSPCC helped more than 750,000 young people to speak out for the first time about abuse they had suffered. I am a firm believer that in order to build a stronger, healthier and wiser society we need to support and help future generations from the beginning of their life journey. This is why cruelty to children must stop. FULL STOP.   Through Virgin Money Giving, you can sponsor me and donations will be quickly processed and passed to NSPCC. Virgin Money Giving is a non-profit organization and will claim gift aid on a charity's behalf where the donor is eligible for this. If you are a UK tax payer please don't forget to select Gift Aid. Gift Aid is great because it means charities get extra money added to their donations at no extra cost to the donor. For every £1 donated, the charity currently receives £1.28 when you add Gift Aid. Anyone who would like to find out more can visit my Facebook page ‘Luca North Pole charity fundraising trip’. I really appreciate all your support and thank you for supporting the NSPCC. Technorati Tags: Channel Manager, challenge, Arctic, North Pole, NSPCC, cruelty to children, Luca North Pole charity fundraising trip. If you have any questions related to this article contact [email protected].

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't know easily that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that is obvious to the human eye. So, the algorithm I need to write would need to use as few user-entered parameters as possible (for replicability and objectivity), go through all the particle positions, figure out which particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate the distance to all other particles, and the threshold for being in a cluster was whether there were N particles within a distance d (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order-N search, incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ≈ 7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, and each particle only has to have distances calculated to the other particles in its cell. Theoretically, if this were a real tree, I should get order N log(N) as opposed to N² speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
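    The grid-bucketed neighbour count described above can be sketched in a few dozen lines. The sketch below is mine, not the author's code: it assumes 2D positions in the ring plane, it uses a single uniform grid with a 3x3-cell neighbourhood search rather than the offset grids described, and the field and parameter names (X, Z, Mass, d, minNeighbours, corresponding to the d and N above) are placeholders for whatever the real simulation stores.

        using System;
        using System.Collections.Generic;

        // One particle of the simulation; field names are placeholders.
        class Particle
        {
            public double X, Z, Mass;
        }

        static class ClumpFinder
        {
            // Sums the mass of every particle that has at least minNeighbours other
            // particles within distance d, using a uniform grid to avoid the full N² scan.
            public static double ClumpedMass(IList<Particle> particles, double d, int minNeighbours)
            {
                double cellSize = 7.0 * d;   // cells a few times larger than d, as in the post
                var grid = new Dictionary<(long, long), List<Particle>>();

                foreach (var p in particles)
                {
                    var key = Key(p, cellSize);
                    List<Particle> bucket;
                    if (!grid.TryGetValue(key, out bucket))
                    {
                        bucket = new List<Particle>();
                        grid[key] = bucket;
                    }
                    bucket.Add(p);
                }

                double clumpedMass = 0.0;
                double d2 = d * d;

                foreach (var p in particles)
                {
                    int neighbours = 0;
                    var (cx, cz) = Key(p, cellSize);

                    // Since cellSize >= d, every neighbour within d lies in the 3x3 block
                    // of cells around the particle's own cell.
                    for (long ix = cx - 1; ix <= cx + 1 && neighbours < minNeighbours; ix++)
                    for (long iz = cz - 1; iz <= cz + 1 && neighbours < minNeighbours; iz++)
                    {
                        List<Particle> bucket;
                        if (!grid.TryGetValue((ix, iz), out bucket)) continue;
                        foreach (var q in bucket)
                        {
                            if (ReferenceEquals(p, q)) continue;
                            double dx = p.X - q.X, dz = p.Z - q.Z;
                            if (dx * dx + dz * dz <= d2)
                            {
                                neighbours++;
                                if (neighbours >= minNeighbours) break;
                            }
                        }
                    }

                    if (neighbours >= minNeighbours) clumpedMass += p.Mass;
                }

                return clumpedMass;
            }

            static (long, long) Key(Particle p, double cellSize)
            {
                return ((long)Math.Floor(p.X / cellSize), (long)Math.Floor(p.Z / cellSize));
            }
        }

    Grouping the flagged particles into individual clumps, if that ever becomes necessary, would only need a union-find pass over the same neighbour pairs.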

    Read the article

  • Isometric screen to 3D world coordinates efficiently

    - by Justin
    I've been having a difficult time transforming 2D screen coordinates to 3D isometric space. This is a situation where I am working in 3D but have an orthographic camera, and my camera is positioned at (100, 200, 100), where the xz plane is flat and y is up and down. I've been able to get a sort of working solution, but I feel like there must be a better way. Here's what I'm doing: With my camera at (0, 1, 0) I can translate my screen coordinates directly to 3D coordinates by doing: mouse2D.z = (( event.clientX / window.innerWidth ) * 2 - 1) * -(window.innerWidth /2); mouse2D.x = (( event.clientY / window.innerHeight) * 2 + 1) * -(window.innerHeight); mouse2D.y = 0; Everything is okay so far. Now when I change my camera back to (100, 200, 100) my 3D space has been rotated 45 degrees around the y axis and then rotated about 54 degrees around a vector Q that runs along the xz plane at a 45 degree angle between the positive z axis and the negative x axis. So what I do to find the point is first rotate my point by 45 degrees using a matrix around the y axis. Now I'm close. So then I rotate my point around the vector Q. But my point is closer to the origin than it should be, since the Y value is not 0 anymore. What I want is that after the rotation my Y value is 0. So now I exchange the X and Z coordinates of my rotated vector with the X and Z coordinates of my non-rotated vector. So basically I have my old vector, but its y value is at the appropriately rotated amount. Now I use another matrix to rotate my point around the vector Q in the opposite direction, and I end up with the point where I clicked. Is there a better way? I feel like I must be missing something. Also, my method isn't completely accurate: it seems to land within 5-10 units of where I click, maybe because of rounding across many calculations. Sorry for such a long question.
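    For what it's worth, one common way to avoid the rotate-and-swap gymnastics is to treat the problem as a ray-plane intersection: unproject the mouse position into a ray from the orthographic camera and intersect that ray with the ground plane y = 0. The sketch below uses C#-style vector math (System.Numerics) rather than any particular engine's API, and all parameter names are mine; the same idea maps onto whatever unproject helper your framework provides.

        using System;
        using System.Numerics;

        static class IsoPicker
        {
            // Converts a mouse position into the point on the ground plane (y = 0) that the
            // orthographic camera "sees" under the cursor. camPos is the camera position,
            // camForward/camRight/camUp its orientation, and viewWidth/viewHeight are the
            // world-space dimensions of the orthographic view volume.
            public static Vector3 ScreenToGround(
                float mouseX, float mouseY, float screenW, float screenH,
                Vector3 camPos, Vector3 camForward, Vector3 camRight, Vector3 camUp,
                float viewWidth, float viewHeight)
            {
                // Normalised device coordinates in [-1, 1]; screen y grows downward, so flip it.
                float ndcX = (mouseX / screenW) * 2f - 1f;
                float ndcY = 1f - (mouseY / screenH) * 2f;

                // With an orthographic camera every pick ray is parallel to the view direction;
                // only the ray origin moves across the view plane.
                Vector3 origin = camPos
                    + camRight * (ndcX * viewWidth * 0.5f)
                    + camUp * (ndcY * viewHeight * 0.5f);
                Vector3 dir = Vector3.Normalize(camForward);

                // Intersect the ray with the plane y = 0: origin.Y + t * dir.Y = 0.
                if (Math.Abs(dir.Y) < 1e-6f)
                    throw new InvalidOperationException("View direction is parallel to the ground plane.");
                float t = -origin.Y / dir.Y;
                return origin + dir * t;
            }
        }

    Because the intersection is computed analytically in one step, this approach should also avoid the small drift that accumulates when several rotations are chained together.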

    Read the article
