Search Results

Search found 6730 results on 270 pages for 'loaded'.


  • Simultaneously calling multiple methods on a WCF service from silverlight

    - by Ola Karlsson
    A while back I had to debug some performance issues in an existing Silverlight app. As the problem and its solution were a bit obscure, and finding info about them was quite tricky, I thought I'd share; maybe it can help the next person with this problem.

    The App

    On start, the app would make a number of calls to different methods on a WCF service to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This resulted in quite a long loading time for the whole UI, which was set up so it wouldn't let the user interact with anything until all the service calls had finished. First I broke the longer-running service call out from the others, then removed the constraint that it had to be loaded before the UI in general became responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section kept loading independently.

    The Problem

    However, this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn't activate until the long-running call returned. So now I did what I should have done to start with: I got Fiddler out and had a look at what was really happening. What I found was that once the call to the long-running service method was placed, all subsequent calls were waiting for that one to return before executing. Not having really worked with WCF previously, or knowing much about it in general, I was stumped. I knew of the issues where Silverlight is restricted by the browser's networking features with regard to the number of simultaneous connections, etc. However, that just didn't seem to be the issue here: you could clearly see in Fiddler that there were numerous calls, but they just weren't returning. I thought the problem might be in the WCF service, but the calls were really not that complicated, and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario: I hit the search engines. I did a whole bunch of searching on things like "multiple simultaneous WCF calls from Silverlight" and "calling long running WCF services from Silverlight". This, however, got me pretty much nowhere; I found a whole heap of resources on how to do WCF calls from Silverlight, but most of them were very basic and of no use whatsoever.

    The fog is clearing

    It wasn't until I came across the term "WCF blocking calls" and started incorporating it in my searches that I began to get somewhere. Those searches quite quickly brought me to a thread in the Silverlight forum, "Long-running WCF call blocking subsequent calls", which discussed the exact problem I was facing, and best of all, one of the guys there had the solution! The short answer is in the forum post, and one of the guys answering has also written a more extensive blog post about it called "Silverlight, WCF, and ASP.Net Configuration Gotchas", which covers it very well. So come on, what's the solution?! I hear you ask, unless you've already gone to the links and looked it up. ;)

    The Solution

    Well, it turns out that the issue is rooted in a mix of Silverlight, ASP.NET and WCF. Basically, if you're making multiple calls to a single WCF web service and you have ASP.NET session state enabled, the calls will be executed sequentially by the service, so any long-running call will block subsequent ones.
    So why is ASP.NET session state affecting us? We're working in Silverlight, right? Well, as mentioned earlier, by default Silverlight uses the browser's networking stack when making service calls, so to the WCF service the call looks like it might as well be coming from a normal ASP.NET application. To get around this, we look to a feature introduced in Silverlight 3, namely the client HTTP stack.

    The Client HTTP Stack to the rescue

    By using the following syntax (for example in our App.xaml.cs, Application_Startup method) WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp); we can set our Silverlight application to use the client HTTP stack, which incidentally solves our problem! By using Silverlight's own networking stack, rather than that of the browser, we get around the ASP.NET/WCF session state issue. The above code specifies that all calls to addresses starting with "http://" should go through the client stack; this can actually be set more granularly, and you can specify it to be used only for certain domains, etc.

    Summary

    The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe making it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola
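    For reference, here is a minimal sketch of that registration in context, assuming the standard Silverlight application project template (the App class, Application_Startup handler and MainPage type are the template defaults):

        using System;
        using System.Net;
        using System.Net.Browser; // WebRequestCreator lives here in Silverlight
        using System.Windows;

        public partial class App : Application
        {
            private void Application_Startup(object sender, StartupEventArgs e)
            {
                // Route all http:// traffic through Silverlight's client HTTP
                // stack, bypassing the browser stack and its ASP.NET session
                // affinity. Must run before any service proxies are created.
                WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

                RootVisual = new MainPage();
            }
        }

    Any WebRequest, or WCF proxy, created after this call will then go through the client stack.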

    Read the article

  • The JRockit Performance Counters

    - by Marcus Hirt
    Every now and then I get a question regarding what the attributes in the PerfCounters dynamic MBean represent. Now, all the MBeans under the oracle.jrockit.management (bea.jrockit.management pre R28) domain are part of what we call JMXMAPI (the JRockit JMX-based Management API), which is unsupported. Therefore there is no official documentation for the API. I did, however, write a bit about JMXMAPI in my recent JRockit book, Oracle JRockit: The Definitive Guide. The information in the table below is from that book:

    java.cls.loadedClasses: The number of classes loaded since the start of the JVM.
    java.cls.unloadedClasses: The number of classes unloaded since the start of the JVM.
    java.property.java.class.path: The class path of the JVM.
    java.property.java.endorsed.dirs: The endorsed dirs. See the Endorsed Standards Override Mechanism.
    java.property.java.ext.dirs: The ext dirs, which are searched for jars that should be automatically put on the classpath. See the Java documentation for java.ext.dirs.
    java.property.java.home: The root of the JDK or JRE installation.
    java.property.java.library.path: The library path used to find user libraries.
    java.property.java.vm.version: The JRockit version.
    java.rt.vmArgs: The list of VM arguments.
    java.threads.daemon: The number of running daemon threads.
    java.threads.live: The total number of running threads.
    java.threads.livePeak: The peak number of threads that have been running since JRockit was started.
    java.threads.nonDaemon: The number of non-daemon threads running.
    java.threads.started: The total number of threads started since the start of JRockit.
    jrockit.gc.latest.heapSize: The current heap size in bytes.
    jrockit.gc.latest.nurserySize: The current nursery size in bytes.
    jrockit.gc.latest.oc.compaction.time: How long, in ticks, the last compaction lasted. Reset to 0 if compaction is skipped.
    jrockit.gc.latest.oc.heapUsedAfter: Used heap at the end of the last OC, in bytes.
    jrockit.gc.latest.oc.heapUsedBefore: Used heap at the start of the last OC, in bytes.
    jrockit.gc.latest.oc.number: The number of OCs that have occurred so far.
    jrockit.gc.latest.oc.sumOfPauses: The paused time for the last OC, in ticks.
    jrockit.gc.latest.oc.time: The time the last OC took, in ticks.
    jrockit.gc.latest.yc.sumOfPauses: The paused time for the last YC, in ticks.
    jrockit.gc.latest.yc.time: The time the last YC took, in ticks.
    jrockit.gc.max.oc.individualPause: The longest OC pause so far, in ticks.
    jrockit.gc.max.yc.individualPause: The longest YC pause so far, in ticks.
    jrockit.gc.total.oc.compaction.externalAborted: Number of aborted external compactions so far.
    jrockit.gc.total.oc.compaction.internalAborted: Number of aborted internal compactions so far.
    jrockit.gc.total.oc.compaction.internalSkipped: Number of skipped internal compactions so far.
    jrockit.gc.total.oc.compaction.time: The total time spent doing compaction so far, in ticks.
    jrockit.gc.total.oc.ompaction.externalSkipped: Number of skipped external compactions so far.
    jrockit.gc.total.oc.pauseTime: The sum of all OC pause times so far, in ticks.
    jrockit.gc.total.oc.time: The total time spent doing OC so far, in ticks.
    jrockit.gc.total.pageFaults: The number of page faults that have occurred during GC so far.
    jrockit.gc.total.yc.pauseTime: The sum of all YC pause times, in ticks.
    jrockit.gc.total.yc.promotedObjects: The number of objects that all YCs have promoted.
    jrockit.gc.total.yc.promotedSize: The total number of bytes that all YCs have promoted.
    jrockit.gc.total.yc.time: The total time spent doing YC, in ticks.
    oracle.ci.jit.count: The number of methods JIT compiled.
    oracle.ci.jit.timeTotal: The total time spent JIT compiling, in ticks.
    oracle.ci.opt.count: The number of methods optimized.
    oracle.ci.opt.timeTotal: The total time spent optimizing, in ticks.
    oracle.rt.counterFrequency: Used to convert tick values to seconds.

    Note that many of these counters are excellent choices for attributes to plot in the Management Console. Also note that many values are in ticks; to convert them to seconds, divide by the value in the oracle.rt.counterFrequency counter.

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons.

    Relational reports have a predefined structure that end users cannot change, and they are simple for end users to use. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users, and any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting alone. If you create a semantic model of your data, your end users can create ad-hoc report structures; however, the development is more complex, because a developer is needed to create these semantic models.

    For OLAP, you typically use specialized database management systems, and you get lightning-fast analyses. End users can use rich and thin clients to interactively change the structure of the report, typically graphically. However, the development of an OLAP system is often quite complex: it involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process.

    With data mining, a structure is not selected in advance; data mining searches for the structure. As a result, data mining can give you the most valuable results, because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project: end users have to understand the results, and subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes.

    Each type of analysis has its own place in an enterprise system, and SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the "I" in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.

    Books

    Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
    Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 – Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
    Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can't miss this book if you want to mine your data with SQL Server tools.
    Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both business and technical perspectives.
    Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
    Thomas and Ronald Wonnacott: Introductory Statistics – if you thought you could get away without statistics, then you are not serious about data mining.
    Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – an in-depth explanation of the most popular data mining algorithms.
    Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
    Paolo Guidici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.

    Forthcoming presentations

    I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:
    Wednesday, October 16th, 2013: Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
    Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on the Excel Data Mining Add-ins and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
    Sinergija 2013, Belgrade, Serbia – Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
    SQL Rally Amsterdam, Netherlands – Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel.
    Why three different titles for the same presentation? I don't know; I guess I forgot the name I had proposed every time, right after I sent the proposal.

    Courses

    Data Mining with SQL Server 2012 – I wrote a three-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.

    OK, now you know: no more excuses. Start learning data mining and get the most out of your data!

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for a book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early February this year, and needless to say my copy is on its way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat, and this edition promises to be no less. Here is a brief description of what's new and updated from the 2nd edition.

    Part I – CLR Basics

    Chapter 1 – The CLR's Execution Model: Added discussion about C#'s /optimize and /debug switches and how they relate to each other.
    Chapter 2 – Building, Packaging, Deploying, and Administering Applications and Types: Improved discussion about Win32 manifest information and version resource information.
    Chapter 3 – Shared Assemblies and Strongly Named Assemblies: Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute.

    Part II – Designing Types

    Chapter 4 – Type Fundamentals: No new topics.
    Chapter 5 – Primitive, Reference, and Value Types: Enhanced discussion of checked and unchecked code, and added discussion of the new BigInteger type. Also added discussion of C# 4.0's dynamic primitive type.
    Chapter 6 – Type and Member Basics: No new topics.
    Chapter 7 – Constants and Fields: No new topics.
    Chapter 8 – Methods: Added discussion of extension methods and partial methods.
    Chapter 9 – Parameters: Added discussion of optional/named parameters and implicitly typed local variables.
    Chapter 10 – Properties: Added discussion of automatically implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type, and the ExpandoObject type.
    Chapter 11 – Events: Added discussion of events and thread safety, as well as a cool extension method to simplify the raising of an event (a sketch of that pattern appears right after this chapter list).
    Chapter 12 – Generics: Added discussion of delegate and interface generic type argument variance.
    Chapter 13 – Interfaces: No new topics.

    Part III – Essential Types

    Chapter 14 – Chars, Strings, and Working with Text: No new topics.
    Chapter 15 – Enums: Added coverage of new Enum and Type methods to access enumerated type instances.
    Chapter 16 – Arrays: Added a new section on initializing array elements.
    Chapter 17 – Delegates: Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions.
    Chapter 18 – Attributes: No new topics.
    Chapter 19 – Nullable Value Types: Added discussion on performance.

    Part IV – CLR Facilities

    Chapter 20 – Exception Handling and State Management: This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code.
    Chapter 21 – Automatic Memory Management: Added discussion of C#'s fixed statement and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn't have to support weak delegates itself). Added discussion of the new ConditionalWeakTable class, GC collection modes, full GC notifications, and latency modes. I also include a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur.
    Chapter 22 – CLR Hosting and AppDomains: Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added a section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added sections on AppDomain monitoring and first-chance exception notifications. Updated the section on the AppDomainManager class.
    Chapter 23 – Assembly Loading and Reflection: Added a section on how to deploy a single file with dependent assemblies embedded inside it. Added a section comparing reflection invoke vs. bind/invoke vs. bind/create delegate/invoke vs. C#'s dynamic type.
    Chapter 24 – Runtime Serialization: This is a whole new chapter that was not in the 2nd Edition.

    Part V – Threading

    Chapter 25 – Threading Basics: Whole new chapter motivating why Windows supports threads, and covering thread overhead, CPU trends, NUMA architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, and foreground vs. background threads.
    Chapter 26 – Performing Compute-Bound Asynchronous Operations: Whole new chapter explaining the CLR's thread pool. This chapter covers all the new .NET 4.0 constructs, including cooperative cancellation, Tasks, the Parallel class, Parallel LINQ, timers, how the thread pool manages its threads, cache lines, and false sharing.
    Chapter 27 – Performing I/O-Bound Asynchronous Operations: Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then I go into the CLR's Asynchronous Programming Model (APM), my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, and programming model soup.
    Chapter 28 – Primitive Thread Synchronization Constructs: Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment.
    Chapter 29 – Hybrid Thread Synchronization Constructs: Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, three ways to solve the double-check locking technique, .NET 4.0's Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0's concurrent collection classes, and the ReaderWriterGate and SyncGate classes.
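    Picking up on the Chapter 11 item above: here is a minimal sketch of the kind of thread-safe event-raising extension method the book describes. This is my own illustration of the general pattern, not the book's code; the MyEventArgs and Publisher types are made up for the example:

        using System;
        using System.Threading;

        // Hypothetical EventArgs type, used only for this illustration.
        public class MyEventArgs : EventArgs
        {
            public string Message { get; set; }
        }

        public static class EventArgExtensions
        {
            // Raise an event in a thread-safe way: read the delegate reference
            // once into a local, so another thread removing the last handler
            // between the null check and the invocation cannot cause a
            // NullReferenceException.
            public static void Raise<TEventArgs>(this TEventArgs e, object sender,
                ref EventHandler<TEventArgs> eventDelegate) where TEventArgs : EventArgs
            {
                EventHandler<TEventArgs> temp =
                    Interlocked.CompareExchange(ref eventDelegate, null, null);
                if (temp != null)
                    temp(sender, e);
            }
        }

        public class Publisher
        {
            public event EventHandler<MyEventArgs> SomethingHappened;

            protected virtual void OnSomethingHappened(MyEventArgs e)
            {
                // One line at every raise site instead of the copy/check/invoke dance.
                e.Raise(this, ref SomethingHappened);
            }
        }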

    Read the article

  • Get your content off Blogger.com

    - by Daniel Moth
    Due to blogger.com deprecating FTP users, I've decided to move my blog. When I think of the content of a blog, four items come to mind: blog posts, comments, binary files that the blog posts linked to (e.g. images, ZIP files), and the CSS + structure of the blog.

    1. Binaries

    The binary files you used in your blog posts are sitting on your own web space, so blogger.com is not really involved with those. Nothing for you to do at this stage; I'll come back to these in another post.

    2. CSS and structure

    In the best case this exists as a separate CSS file on your web space (so no action for now); in the worst case, like me, your CSS is embedded in the HTML. In the latter case, simply navigate from your dashboard to "Template" then "Edit HTML" and copy-paste the contents of the box. Save that locally in a txt file and we'll come back to it in another post.

    3. Blog posts and comments

    The blog posts and comments exist in all the HTML files on your own web space. Parsing HTML files to extract them can be painful, so it is easier to download the XML files from blogger's servers that contain all your blog posts and comments.

    3.1 Single XML file, but incomplete

    The obvious thing to do is go into your dashboard "Settings" and, under the "Basic" tab, look at the top next to "Blog Tools". There is a link there to "Export blog" which downloads an XML file with both comments and posts. The problem is that it only contains 200 comments; if you have more than that, you will lose the surplus. Also, this XML file has a lot of noise compared to the better solution described next. (Note that a tool I will refer to in a future post deals with either kind of XML file.)

    3.2 Multiple XML files

    First you need to find your blog ID. In case you don't know what that is, navigate to the "Template" as described in section 2 above. You will find references to the blog ID in the HTML there, but you can also see it as part of the URL in your browser: blogger.com/template-edit.g?blogID=YOUR_NUMERIC_ID. Mine is 7 digits. You can now navigate to these URLs to download the XML for your posts and comments respectively:

    blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=1
    blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=1

    Note that you can only get 500 posts at a time and only 200 comments at a time. To get more than that, you have to change the URL and download the next batch. To get you started, to get the XML for the next 500 posts and the next 200 comments respectively, you'd use these URLs:

    blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=501
    blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=201

    ...and so on and so forth. Keep all the XML files in the same folder on your local machine (with nothing else in there).

    4. Validating the XML, aka editing older blog posts

    The XML files you just downloaded really contain HTML fragments for all your blog posts. If you are like me, your blog posts did not conform to XHTML, so passing them to an XML parser (which is what we will want to do) will result in the XML parser choking. So the next step is to fix that. This can be no work at all for you, a huge time sink, or just a couple hours of pain (which was my case). The process I followed was to attempt to load the XML files using XmlDocument.Load and wait for the exception to be thrown from my code. The exception would point to the exact offending line and column, which would help me fix the issue.
    Rather than fix it in the XML itself, I would go back, edit the offending blog post and fix it there - recommended! Then I'd repeat the cycle until the XML could be loaded into the XmlDocument. To give you an idea, some of the issues I encountered were: extra or missing quotes in img and href elements, direct usage of chevrons instead of encoding them as &lt;, missing closing tags, mismatched nested pairs of elements, and capitalization of HTML elements. For a full list of things that may go wrong, see this.

    5. Opportunity for other changes

    I also found a few posts that did not have a category assigned, so I fixed those too. I took the further opportunity to create new categories and tag some of my blog posts with them. Note that I did not remove or change categories of existing posts; I only added.

    In another post we'll see how to use the XML files you stored in the local folder… Comments about this post welcome at the original blog.
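    To make section 4 concrete, here is a minimal sketch of that validation loop, assuming the downloaded XML files sit together in a local folder (the C:\BloggerExport path is made up for the example):

        using System;
        using System.IO;
        using System.Xml;

        class BloggerXmlValidator
        {
            static void Main()
            {
                // Hypothetical folder holding the downloaded posts/comments XML.
                foreach (string file in Directory.GetFiles(@"C:\BloggerExport", "*.xml"))
                {
                    try
                    {
                        var doc = new XmlDocument();
                        doc.Load(file); // throws XmlException on non-XHTML fragments
                        Console.WriteLine("OK: " + Path.GetFileName(file));
                    }
                    catch (XmlException ex)
                    {
                        // The exception pinpoints the offending spot; fix the
                        // original blog post, re-download the XML, and re-run.
                        Console.WriteLine("{0}: line {1}, column {2}: {3}",
                            Path.GetFileName(file), ex.LineNumber, ex.LinePosition,
                            ex.Message);
                    }
                }
            }
        }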

    Read the article

  • Getting Started with Prism (aka Composite Application Guidance for WPF and Silverlight)

    - by dotneteer
    Overview

    Prism is a framework from the Microsoft Patterns and Practices team that allows you to create WPF and Silverlight applications in a modular way. It is especially valuable for larger projects in which a large number of developers can develop in parallel. Prism achieves its goal by supplying several services:

    · Dependency Injection (DI) and Inversion of Control (IoC): By using DI, Prism takes the responsibility for instantiating and managing the lifetime of dependency objects away from individual components and gives it to a container. Prism relies on containers to discover, manage and compose large numbers of objects. By varying the configuration, the container can also inject mock objects for unit testing. Out of the box, Prism supports Unity and MEF as containers, although it is possible to use other containers by subclassing the Bootstrapper class.

    · Modularity and regions: Prism supplies the framework to split an application into modules loaded by the application shell. Each module is a library project that contains both UI and code and is responsible for initializing itself when loaded by the shell. Each window can be further divided into regions. A region is a user control with an associated model.

    · Model, View and View-Model (MVVM) pattern: Prism promotes the use of MVVM. The use of a DI container makes it much easier to inject the model into the view. WPF already has excellent data binding and commanding mechanisms. To be productive with Prism, it is important to understand WPF data binding and commanding well.

    · Event aggregation: Prism promotes loosely coupled components. Prism discourages components in different modules from communicating with each other directly, since that leads to dependencies. Instead, Prism supplies an event-aggregation mechanism that allows components to publish and subscribe to events without knowing about each other.

    Architecture

    In the following, I will go into a little more detail on the services provided by Prism.

    Bootstrapper

    In a typical WPF application, application start-up is controlled by App.xaml and its code-behind, and the main window of the application is typically specified in the App.xaml file. In a Prism application, we start a bootstrapper in the App class and delegate the duty of creating the main window to the bootstrapper. The bootstrapper starts a dependency-injection container, so all future object instantiations are managed by the container. Out of the box, Prism provides the UnityBootstrapper and MefBootstrapper abstract classes. An application needs to either provide a concrete implementation of one of these bootstrappers or, alternatively, subclass the Bootstrapper class with another DI container. A concrete bootstrapper class must implement the CreateShell method. Its responsibility is to resolve and create the Shell object through the DI container to serve as the main window for the application. The other important method to override is ConfigureModuleCatalog, where the bootstrapper can register modules for the application. In a more advanced scenario, an application does not have to know all its modules at compile time; modules can be discovered at run time. Readers can refer to one of the Open Modularity QuickStarts for more information.

    Modules

    Once modules are registered with or discovered by Prism, they are instantiated by the DI container and their Initialize method is called. The DI container can inject into a module a region registry that implements the IRegionViewRegistry interface. The module, in its Initialize method, can then call the RegisterViewWithRegion method of the registry to register its views with regions.
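    To make the bootstrapper and module discussion concrete, here is a minimal sketch against Prism 4's UnityBootstrapper; the Shell window and HelloModule types are placeholders I made up for the example:

        using System.Windows;
        using Microsoft.Practices.Prism.Modularity;
        using Microsoft.Practices.Prism.UnityExtensions;
        using Microsoft.Practices.Unity;

        // Placeholder shell window and module for the sketch.
        public class Shell : Window { }

        public class HelloModule : IModule
        {
            // Called by Prism after the container instantiates the module.
            public void Initialize() { /* register views with regions here */ }
        }

        public class MyBootstrapper : UnityBootstrapper
        {
            // Resolve the shell through the container so its dependencies
            // are injected.
            protected override DependencyObject CreateShell()
            {
                return Container.Resolve<Shell>();
            }

            protected override void InitializeShell()
            {
                base.InitializeShell();
                Application.Current.MainWindow = (Window)Shell;
                Application.Current.MainWindow.Show();
            }

            // Register the modules known at compile time.
            protected override void ConfigureModuleCatalog()
            {
                var catalog = (ModuleCatalog)ModuleCatalog;
                catalog.AddModule(typeof(HelloModule));
            }
        }

        // In App.xaml.cs, OnStartup would then simply call:
        //     new MyBootstrapper().Run();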
    Regions

    Once registered, regions are managed by the RegionManager. The shell can then load regions either through the RegionManager.RegionName attached property or dynamically through code. When a view is created by the region manager, the DI container can inject a view model and other services into the view. The view then has a reference to the view model, through which it can interact with backend services.

    Service locator

    Although it is possible to inject services into dependent classes through a DI container, an alternative is to use the ServiceLocator to retrieve a service on demand. Prism supplies a service locator implementation, and it is possible to get an instance of a service by calling:

    ServiceLocator.Current.GetInstance<IServiceType>()

    Event aggregator

    Prism supplies an IEventAggregator interface and implementation that can be injected into any classes that need to communicate with each other in a loosely coupled fashion. The event aggregator uses a publisher/subscriber model. A class publishes an event by calling eventAggregator.GetEvent<EventType>().Publish(parameter). Other classes can subscribe to the event by calling eventAggregator.GetEvent<EventType>().Subscribe(eventHandler, other options).

    Getting started

    The easiest way to get started with Prism is to go through the Prism Hands-On Labs and look at the Hello World QuickStart. The Hello World QuickStart shows how the bootstrapper, modules and regions work. Next, I would recommend looking at the Stock Trader Reference Implementation. It is a more in-depth example that resembles how we would want to set up a real application. Several other QuickStarts cover individual Prism services; some scenarios, such as dynamic module discovery, are more advanced. Apart from the official Prism documentation, you can get an overview by reading Glenn Block's MSDN Magazine article. I have found that the best free training material is from the Boise Code Camp. To be effective with Prism, it is important to first understand key concepts of WPF well, such as the DependencyProperty system, data binding, resources, themes and ICommand. It is also important to know your DI container of choice well. I will try to explore these subjects in depth in the future.

    Testimony

    Recently, I worked on a desktop WPF application using Prism, and I had a wonderful experience with it. Prism is flexible enough even in the presence of third-party controls such as the Telerik WPF controls. We never encountered any significant obstacles.
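    And here is a minimal sketch of the event aggregator in use, assuming Prism 4's CompositePresentationEvent; the StockSelectedEvent payload and the two view models are made-up examples:

        using System;
        using Microsoft.Practices.Prism.Events;

        // Hypothetical event carrying a selected stock symbol between modules.
        public class StockSelectedEvent : CompositePresentationEvent<string> { }

        public class PositionsViewModel
        {
            public PositionsViewModel(IEventAggregator eventAggregator)
            {
                // Subscribe on the UI thread; this class never references
                // the publishing module.
                eventAggregator.GetEvent<StockSelectedEvent>()
                               .Subscribe(OnStockSelected, ThreadOption.UIThread);
            }

            private void OnStockSelected(string symbol)
            {
                Console.WriteLine("Selected: " + symbol);
            }
        }

        public class WatchListViewModel
        {
            private readonly IEventAggregator eventAggregator;

            public WatchListViewModel(IEventAggregator eventAggregator)
            {
                this.eventAggregator = eventAggregator;
            }

            public void SelectStock(string symbol)
            {
                // Publish without knowing who (if anyone) is listening.
                eventAggregator.GetEvent<StockSelectedEvent>().Publish(symbol);
            }
        }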

    Read the article

  • Super Joybox 5 HID 0925:8884 not recognized as joystick in Ubuntu 12.04 LTS

    - by Tim Evans
    Problem: When using the "Super JoyBox 5" 4-port PlayStation 2 to USB adapter, the device is not recognized as a joystick: no js0 is created, but instead another input eventX and mouseX are created in /dev/input. When using the directional buttons (up, down, left, right) on a PlayStation 1 controller attached to the device, the mouse cursor moves to the top, bottom, left, and right edges of the screen respectively. Buttons are unresponsive. The joypads attached to the device cannot be used in any games or other programs.

    Attempted remedies: Creating a symlink from the eventX to js0 does not solve the problem.

    Additional info: joydev is loaded and running properly according to lsmod. evtest can be run on the created eventX (sudo evtest /dev/input/event14 in my case) and the buttons and axes all register inputs. Here is a paste of evtest's diagnostics and the first couple of button events:

    [code]
    sudo evtest /dev/input/event14
    Input driver version is 1.0.1
    Input device ID: bus 0x3 vendor 0x925 product 0x8884 version 0x100
    Input device name: "HID 0925:8884"
    Supported events:
      Event type 0 (EV_SYN)
      Event type 1 (EV_KEY)
        Event code 288 (BTN_TRIGGER)
        Event code 289 (BTN_THUMB)
        Event code 290 (BTN_THUMB2)
        Event code 291 (BTN_TOP)
        Event code 292 (BTN_TOP2)
        Event code 293 (BTN_PINKIE)
        Event code 294 (BTN_BASE)
        Event code 295 (BTN_BASE2)
        Event code 296 (BTN_BASE3)
        Event code 297 (BTN_BASE4)
        Event code 298 (BTN_BASE5)
        Event code 299 (BTN_BASE6)
        Event code 300 (?)
        Event code 301 (?)
        Event code 302 (?)
        Event code 303 (BTN_DEAD)
        Event code 304 (BTN_A)
        Event code 305 (BTN_B)
        Event code 306 (BTN_C)
        Event code 307 (BTN_X)
        Event code 308 (BTN_Y)
        Event code 309 (BTN_Z)
        Event code 310 (BTN_TL)
        Event code 311 (BTN_TR)
        Event code 312 (BTN_TL2)
        Event code 313 (BTN_TR2)
        Event code 314 (BTN_SELECT)
        Event code 315 (BTN_START)
        Event code 316 (BTN_MODE)
        Event code 317 (BTN_THUMBL)
        Event code 318 (BTN_THUMBR)
        Event code 319 (?)
        Event code 320 (BTN_TOOL_PEN)
        Event code 321 (BTN_TOOL_RUBBER)
        Event code 322 (BTN_TOOL_BRUSH)
        Event code 323 (BTN_TOOL_PENCIL)
        Event code 324 (BTN_TOOL_AIRBRUSH)
        Event code 325 (BTN_TOOL_FINGER)
        Event code 326 (BTN_TOOL_MOUSE)
        Event code 327 (BTN_TOOL_LENS)
        Event code 328 (?)
        Event code 329 (?)
        Event code 330 (BTN_TOUCH)
        Event code 331 (BTN_STYLUS)
        Event code 332 (BTN_STYLUS2)
        Event code 333 (BTN_TOOL_DOUBLETAP)
        Event code 334 (BTN_TOOL_TRIPLETAP)
        Event code 335 (BTN_TOOL_QUADTAP)
      Event type 3 (EV_ABS)
        Event code 0 (ABS_X)        Value 127  Min 0  Max 255  Flat 15
        Event code 1 (ABS_Y)        Value 127  Min 0  Max 255  Flat 15
        Event code 2 (ABS_Z)        Value 127  Min 0  Max 255  Flat 15
        Event code 3 (ABS_RX)       Value 127  Min 0  Max 255  Flat 15
        Event code 4 (ABS_RY)       Value 127  Min 0  Max 255  Flat 15
        Event code 5 (ABS_RZ)       Value 127  Min 0  Max 255  Flat 15
        Event code 6 (ABS_THROTTLE) Value 127  Min 0  Max 255  Flat 15
        Event code 7 (ABS_RUDDER)   Value 127  Min 0  Max 255  Flat 15
        Event code 8 (ABS_WHEEL)    Value 127  Min 0  Max 255  Flat 15
        Event code 9 (ABS_GAS)      Value 127  Min 0  Max 255  Flat 15
        Event code 10 (ABS_BRAKE)   Value 127  Min 0  Max 255  Flat 15
        Event code 11 (?)           Value 127  Min 0  Max 255  Flat 15
        Event code 12 (?)           Value 127  Min 0  Max 255  Flat 15
        Event code 13 (?)           Value 127  Min 0  Max 255  Flat 15
        Event code 14 (?)           Value 127  Min 0  Max 255  Flat 15
        Event code 15 (?)           Value 127  Min 0  Max 255  Flat 15
        Event code 16 (ABS_HAT0X)   Value 0  Min -1  Max 1
        Event code 17 (ABS_HAT0Y)   Value 0  Min -1  Max 1
        Event code 18 (ABS_HAT1X)   Value 0  Min -1  Max 1
        Event code 19 (ABS_HAT1Y)   Value 0  Min -1  Max 1
        Event code 20 (ABS_HAT2X)   Value 0  Min -1  Max 1
        Event code 21 (ABS_HAT2Y)   Value 0  Min -1  Max 1
        Event code 22 (ABS_HAT3X)   Value 0  Min -1  Max 1
        Event code 23 (ABS_HAT3Y)   Value 0  Min -1  Max 1
      Event type 4 (EV_MSC)
        Event code 4 (MSC_SCAN)
    Testing ... (interrupt to exit)
    Event: time 1351223176.126127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001
    Event: time 1351223176.126130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 1
    Event: time 1351223176.126166, -------------- SYN_REPORT ------------
    Event: time 1351223178.238127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001
    Event: time 1351223178.238130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 0
    Event: time 1351223178.238167, -------------- SYN_REPORT ------------
    Event: time 1351223180.422127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002
    Event: time 1351223180.422129, type 1 (EV_KEY), code 289 (BTN_THUMB), value 1
    Event: time 1351223180.422163, -------------- SYN_REPORT ------------
    Event: time 1351223181.558099, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002
    Event: time 1351223181.558102, type 1 (EV_KEY), code 289 (BTN_THUMB), value 0
    Event: time 1351223181.558137, -------------- SYN_REPORT ------------
    Event: time 1351223182.486137, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003
    Event: time 1351223182.486140, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 1
    Event: time 1351223182.486172, -------------- SYN_REPORT ------------
    Event: time 1351223183.302130, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003
    Event: time 1351223183.302132, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 0
    Event: time 1351223183.302165, -------------- SYN_REPORT ------------
    Event: time 1351223184.030133, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004
    Event: time 1351223184.030136, type 1 (EV_KEY), code 291 (BTN_TOP), value 1
    Event: time 1351223184.030166, -------------- SYN_REPORT ------------
    Event: time 1351223184.558135, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004
    Event: time 1351223184.558138, type 1 (EV_KEY), code 291 (BTN_TOP), value 0
    Event: time 1351223184.558168, -------------- SYN_REPORT ------------
    [/code]

    The directional buttons on the pad are being identified as the HAT0Y and HAT0X axes (that's a zero, not the letter O). Apparently, this device used to work flawlessly on kernel 2.4.x systems, and even as late as Ubuntu 10.04. Perhaps the joydev rules for identifying joypads have changed? Currently, this kind of bug is affecting a few different types of controller adapters, but since this is the one that I PERSONALLY have (and it has been driving me my own special brand of crazy), it's the one I'm documenting.

    What I think should happen instead: the device should register js0 through js3, one for each port, or a single js0 that handles all of the connected devices, with differently numbered axes for each connected joypad. Either way, it should work as a joystick and stop controlling the mouse cursor. Please help!

    Read the article

  • Unable to sign in. How to debug?

    - by Dmitriy Budnik
    I had to reboot the system with the reset button. After the reboot I can't sign in. When I enter my password, it seems like the X server just restarts. I can sign in as guest, and I can also sign in on a text TTY. Here are the first 150 lines of my lightdm.log:

    [+0.04s] DEBUG: Logging to /var/log/lightdm/lightdm.log
    [+0.04s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=1070
    [+0.04s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf
    [+0.04s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
    [+0.04s] DEBUG: Registered seat module xlocal
    [+0.04s] DEBUG: Registered seat module xremote
    [+0.04s] DEBUG: Adding default seat
    [+0.04s] DEBUG: Starting seat
    [+0.04s] DEBUG: Starting new display for automatic login as user dmytro
    [+0.04s] DEBUG: Starting local X display
    [+3.64s] DEBUG: X server :0 will replace Plymouth
    [+3.66s] DEBUG: Using VT 7
    [+3.66s] DEBUG: Activating VT 7
    [+3.66s] DEBUG: Logging to /var/log/lightdm/x-0.log
    [+3.66s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
    [+3.66s] DEBUG: Launching X Server
    [+3.66s] DEBUG: Launching process 1154: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none
    [+3.66s] DEBUG: Waiting for ready signal from X server :0
    [+3.66s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
    [+3.66s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
    [+10.78s] DEBUG: Got signal 10 from process 1154
    [+10.78s] DEBUG: Got signal from X server :0
    [+10.78s] DEBUG: Stopping Plymouth, X server is ready
    [+10.80s] DEBUG: Connecting to XServer :0
    [+10.80s] DEBUG: Automatically logging in user dmytro
    [+10.80s] DEBUG: Started session 1303 with service 'lightdm-autologin', username 'dmytro'
    [+13.22s] DEBUG: Session 1303 authentication complete with return value 0: Success
    [+13.26s] DEBUG: Autologin user dmytro authorized
    [+13.27s] DEBUG: Autologin using session ubuntu
    [+14.44s] DEBUG: Dropping privileges to uid 1000
    [+14.48s] DEBUG: Restoring privileges
    [+14.49s] DEBUG: Dropping privileges to uid 1000
    [+14.49s] DEBUG: Writing /home/dmytro/.dmrc
    [+14.61s] DEBUG: Restoring privileges
    [+14.81s] DEBUG: Starting session ubuntu as user dmytro
    [+14.81s] DEBUG: Session 1303 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu
    [+15.76s] DEBUG: New display ready, switching to it
    [+15.76s] DEBUG: Activating VT 7
    [+15.76s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0
    [+16.63s] DEBUG: Session 1303 exited with return value 0
    [+16.63s] DEBUG: User session quit
    [+16.63s] DEBUG: Stopping display
    [+16.63s] DEBUG: Sending signal 15 to process 1154
    [+17.19s] DEBUG: Process 1154 exited with return value 0
    [+17.19s] DEBUG: X server stopped
    [+17.19s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
    [+17.19s] DEBUG: Releasing VT 7
    [+17.19s] DEBUG: Display server stopped
    [+17.19s] DEBUG: Display stopped
    [+17.19s] DEBUG: Active display stopped, switching to greeter
    [+17.19s] DEBUG: Switching to greeter
    [+17.19s] DEBUG: Starting new display for greeter
    [+17.19s] DEBUG: Starting local X display
    [+17.19s] DEBUG: Using VT 7
    [+17.19s] DEBUG: Logging to /var/log/lightdm/x-0.log
    [+17.19s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
    [+17.19s] DEBUG: Launching X Server
    [+17.19s] DEBUG: Launching process 1563: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
    [+17.19s] DEBUG: Waiting for ready signal from X server :0
    [+17.48s] DEBUG: Got signal 10 from process 1563
    [+17.48s] DEBUG: Got signal from X server :0
    [+17.48s] DEBUG: Connecting to XServer :0
    [+17.48s] DEBUG: Starting greeter
    [+17.48s] DEBUG: Started session 1575 with service 'lightdm', username 'lightdm'
    [+17.61s] DEBUG: Session 1575 authentication complete with return value 0: Success
    [+17.61s] DEBUG: Greeter authorized
    [+17.61s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
    [+17.68s] DEBUG: Session 1575 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
    [+20.86s] DEBUG: Greeter connected version=1.2.1
    [+20.86s] DEBUG: Greeter connected, display is ready
    [+20.86s] DEBUG: New display ready, switching to it
    [+20.86s] DEBUG: Activating VT 7
    [+20.86s] DEBUG: Stopping greeter display being switched from
    [+24.90s] DEBUG: Greeter start authentication for dmytro
    [+24.90s] DEBUG: Started session 1746 with service 'lightdm', username 'dmytro'
    [+25.10s] DEBUG: Session 1746 got 1 message(s) from PAM
    [+25.10s] DEBUG: Prompt greeter with 1 message(s)
    [+31.87s] DEBUG: Continue authentication
    [+33.75s] DEBUG: Session 1746 authentication complete with return value 7: Authentication failure
    [+33.75s] DEBUG: Authenticate result for user dmytro: Authentication failure
    [+33.75s] DEBUG: Greeter start authentication for dmytro
    [+33.75s] DEBUG: Session 1746: Sending SIGTERM
    [+33.75s] DEBUG: Started session 2264 with service 'lightdm', username 'dmytro'
    [+33.75s] DEBUG: Session 2264 got 1 message(s) from PAM
    [+33.75s] DEBUG: Prompt greeter with 1 message(s)
    [+36.41s] DEBUG: Continue authentication
    [+36.53s] DEBUG: Session 2264 authentication complete with return value 0: Success
    [+36.53s] DEBUG: Authenticate result for user dmytro: Success
    [+36.54s] DEBUG: User dmytro authorized
    [+36.54s] DEBUG: Greeter requests session ubuntu
    [+36.54s] DEBUG: Using session ubuntu
    [+36.54s] DEBUG: Stopping greeter
    [+36.54s] DEBUG: Session 1575: Sending SIGTERM
    [+37.41s] DEBUG: Greeter closed communication channel
    [+37.41s] DEBUG: Session 1575 exited with return value 0
    [+37.41s] DEBUG: Greeter quit
    [+37.42s] DEBUG: Dropping privileges to uid 1000
    [+37.42s] DEBUG: Restoring privileges
    [+37.43s] DEBUG: Dropping privileges to uid 1000
    [+37.43s] DEBUG: Writing /home/dmytro/.dmrc
    [+38.35s] DEBUG: Restoring privileges
    [+40.37s] DEBUG: Starting session ubuntu as user dmytro
    [+40.37s] DEBUG: Session 2264 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu
    [+40.39s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session1
    [+50.78s] DEBUG: Session 2264 exited with return value 0
    [+50.78s] DEBUG: User session quit
    [+50.78s] DEBUG: Stopping display
    [+50.78s] DEBUG: Sending signal 15 to process 1563
    [+51.53s] DEBUG: Process 1563 exited with return value 0
    [+51.53s] DEBUG: X server stopped
    [+51.53s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
    [+51.53s] DEBUG: Releasing VT 7
    [+51.53s] DEBUG: Display server stopped
    [+51.53s] DEBUG: Display stopped
    [+51.53s] DEBUG: Active display stopped, switching to greeter
    [+51.53s] DEBUG: Switching to greeter
    [+51.53s] DEBUG: Starting new display for greeter
    [+51.53s] DEBUG: Starting local X display
    [+51.53s] DEBUG: Using VT 7
    [+51.53s] DEBUG: Logging to /var/log/lightdm/x-0.log
    [+51.53s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
    [+51.53s] DEBUG: Launching X Server
    [+51.53s] DEBUG: Launching process 2894: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
    [+51.53s] DEBUG: Waiting for ready signal from X server :0
    [+51.75s] DEBUG: Got signal 10 from process 2894
    [+51.75s] DEBUG: Got signal from X server :0
    [+51.75s] DEBUG: Connecting to XServer :0
    [+51.75s] DEBUG: Starting greeter
    [+51.75s] DEBUG: Started session 2898 with service 'lightdm', username 'lightdm'
    [+51.76s] DEBUG: Session 2898 authentication complete with return value 0: Success
    [+51.76s] DEBUG: Greeter authorized
    [+51.76s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
    [+51.76s] DEBUG: Session 2898 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
    [+53.26s] DEBUG: Greeter connected version=1.2.1
    [+53.26s] DEBUG: Greeter connected, display is ready
    [+53.26s] DEBUG: New display ready, switching to it
    [+53.26s] DEBUG: Activating VT 7
    [+53.26s] DEBUG: Stopping greeter display being switched from
    [+54.17s] DEBUG: Greeter start authentication for dmytro
    [+54.17s] DEBUG: Started session 3152 with service 'lightdm', username 'dmytro'
    [+54.18s] DEBUG: Session 3152 got 1 message(s) from PAM
    [+54.18s] DEBUG: Prompt greeter with 1 message(s)
    [+58.61s] DEBUG: Continue authentication
    [+58.65s] DEBUG: Session 3152 authentication complete with return value 0: Success
    [+58.65s] DEBUG: Authenticate result for user dmytro: Success
    [+58.66s] DEBUG: User dmytro authorized
    [+58.66s] DEBUG: Greeter requests session ubuntu
    [+58.66s] DEBUG: Using session ubuntu
    [+58.66s] DEBUG: Stopping greeter
    [+58.66s] DEBUG: Session 2898: Sending SIGTERM

    How can I fix it? What other .log files could possibly give me a clue?

    Update: Possibly it's a duplicate of Desktop login fails, terminal works

    Read the article

  • Code Structure / Level Design: Plants vs Zombies game level dissection

    - by lalan
    Hi Friends,

    I am interested in learning the class structure of Plants vs Zombies, particularly its level design; for those who haven't played it, this video contains a nice play-through: http://www.youtube.com/watch?v=89DfdOIJ4xw. How would I go ahead and design the code, mostly the structure and classes, to allow for maximum flexibility and clean development? I am familiar with data-driven design concepts and would use events to handle most of the dynamic behavior.

    Dissection at macro level:

    (Once every level) Load tilemap, props, etc. -- basically build the map
    (Once every level) Camera movement -- might consider it a short cut-scene
    (Once every level) Show enemies you'll face during the present level
    (Once every level) Unit selection window/panel -- selection of defensive plants
    (Once every level) Camera movement -- might consider it a short cut-scene
    (Once every level) HUD creation -- based on unit selection
    (Level loop) Enemy creation -- based on the types of zombies allowed
    (Level loop) Sun/resource generation
    (Level loop) Show messages like 'huge wave of zombies coming', 'final wave'
    (Level loop) Other unique events -- spawn gifts, money, tombstones, etc.
    (Once every level) Unlock new plant

    Potential game scripts:

    a) Level definitions: Level_1_1.xml, Level_1_2.xml, etc.

    Level_1_1.xml :: sample script

    <map>
      <tilemap>tilemapFrontLawn</tilemap>
      <spawnPoints>tiles where particular types of zombies (land vs water) may spawn</spawnPoints>
      <props>position, entity array -- lawnmower, etc.</props>
    </map>
    <zombies>
      <!-- list of zombies who are going to attack, by ids -->
    </zombies>
    <plants>
      <!-- list of plants which are available for defense, by ids -->
    </plants>
    <progression>
      <zombieWave name='first wave' spawnScript='zombieLightWave.lua' unlock='null'>
        <startMessages time='1.5'>Ready</startMessages>
        <endMessages time='1.5'>Huge wave of zombies incoming</endMessages>
      </zombieWave>
    </progression>

    b) Entity definitions: .xmls containing descriptions of zombies, plants, sun, lawnmower, coins, etc.

    Potential classes:

    // LevelManager - based on the level under play, it will load the level
    // script. A few of the functions it may have:
    class LevelManager {
    public:
        bool load(string levelFileName);
        bool enter();
        bool update(float deltatime);
        bool exit();

    private:
        LevelData* mLevelData;
    };

    // LevelData - contains the details of the level loaded by LevelManager.
    class LevelData {
    private:
        string file;

        // arrays of cameras, dialogs, attack waves, etc. in the active level
        LevelCutSceneCamera** mArrayCutSceneCamera;
        LevelCutSceneDialog** mArrayCutSceneDialog;
        LevelAttackWave** mArrayAttackWave;
        ....

        // which camera, dialog, attack wave is active in the level
        uint mCursorCutSceneCamera;
        uint mCursorCutSceneDialog;
        uint mCursorAttackWave;

    public:
        // based on the cursor, get the next camera, dialog, attack wave, etc.
        // in the active level; return false/true based on failure/success
        bool nextCutSceneCamera(LevelCutSceneCamera**);
        bool nextCutSceneDialog(LevelCutSceneDialog**);
    };

    // LevelUnderPlay - driven by LevelManager
    class LevelUnderPlay {
    private:
        LevelCutSceneCamera* mCutSceneCamera;
        LevelCutSceneDialog* mCutSceneDialog;
        LevelAttackWave* mAttackWave;
        Entities** mSelectedPlants;
        Entities** mAllowedZombies;
        bool isCutSceneCameraActive;

    public:
        bool enter();
        bool update(float deltatime);
        bool exit();
    };

    I am totally confused.. :( Does it make sense to use class composition (a flat class hierarchy) for managing levels? Is it a good idea to just add/remove/update sprites (or any drawable stuff) in the current scene from LevelManager or LevelUnderPlay?
    If I want to make a non-linear level design, how should I go ahead? Perhaps I would need a LevelProgression class, which would decide what to do based on a decision tree. Any suggestions would be appreciated very much. Thanks for your time, lalan

    Read the article

  • 3 Ways to Make Steam Even Faster

    - by Chris Hoffman
    Have you ever noticed how slow Steam's built-in web browser can be? Do you struggle with slow download speeds? Or is Steam just slow in general? These tips will help you speed it up.

    Steam isn't a game itself, so there are no 3D settings to change to achieve maximum performance. But there are some things you can do to speed it up dramatically.

    Speed Up the Steam Web Browser

    Steam's built-in web browser (used in both the Steam store and in Steam's in-game overlay to provide a browser you can quickly use within games) can be frustratingly slow on many systems. Rather than the typical speed we've come to expect from Chrome, Firefox, or even Internet Explorer, Steam seems to struggle. When you click a link or go to a new page, there's a noticeable delay before the new page appears, something that doesn't happen in desktop browsers.

    Many people seem to have made peace with this slowness, accepting that Steam's built-in browser is just bad. However, there's a trick that will eliminate this delay on many systems and make the Steam web browser fast. The problem seems to arise from an incompatibility with the Automatically Detect Proxy Settings option, which is enabled by default on Windows. This is a compatibility option that very few people should actually need, so it's safe to disable it.

    To disable this option, open the Internet Options dialog: press the Windows key to access the Start menu or Start screen, type Internet Options, and click the Internet Options shortcut. Select the Connections tab in the Internet Options window and click the LAN settings button. Uncheck the Automatically detect settings option here, then click OK to save your settings.

    If you experienced a significant delay every time a web page loaded in Steam's web browser, it should now be gone. In the unlikely event that you encounter some sort of problem with your network connection, you can always re-enable this option.

    Increase Steam's Game Download Speed

    Steam attempts to automatically select the download server nearest to your location. However, it may not always select the ideal one. In the case of high-traffic events like big seasonal sales and huge game launches, you may also benefit from selecting a less congested server.

    To do this, open Steam's settings by clicking the Steam menu in Steam and selecting Settings. Click over to the Downloads tab and select the closest download server from the Download Region box. You should also ensure that Steam's download bandwidth isn't limited from here. You may want to restart Steam and see if your download speeds improve after changing this setting.

    In some cases, the closest server might not be the fastest; one a bit farther away could be faster if your local server is more congested, for example. Steam once provided information about content server load, which allowed you to select a regional server that wasn't under high load, but this information no longer seems to be available. Steam still provides a page that shows you the amount of download activity happening in different regions, including statistics about the difference in download speeds in different US states, but this information isn't as useful.

    Accelerate Steam and Your Games

    One way to speed up all your games, and Steam itself, is by getting a solid-state drive and installing Steam to it. Steam allows you to easily move your Steam folder (C:\Program Files (x86)\Steam by default) to another hard drive. Just move it like you would any other folder.
    You can then launch the Steam.exe program as if you had never moved Steam's files.

    Steam also allows you to configure multiple game library folders. This means you can set up one Steam library folder on a solid-state drive and one on your larger magnetic hard drive. Install your most frequently played games to the solid-state drive for maximum speed and your less frequently played ones to the slower magnetic hard drive to save SSD space.

    To set up additional library folders, open Steam's Settings window and click the Downloads tab. You'll find the Steam Library Folders option here. Click the Add Library Folder button and create a new game library on another hard drive. When you install a game in Steam, you'll be asked which library folder you want to install it to.

    With the proxy compatibility option disabled, the correct download server chosen, and Steam installed to a fast SSD, it should be a speed demon. There's not much more you can do to speed up Steam, short of upgrading other hardware like your computer's CPU.

    Image Credit: Andrew Nash on Flickr

    Read the article

  • Windows 7 doesn't boot after Ubuntu install

    - by Omu
    I had windows 7 installed on my pc, then I installed Ubuntu 10.10/ During the installation process I have chosen to manually set my partitions: I set a 10GB drive for ubuntu root 1GB drive for swap and for boot drive I've chosen the one used by windows 7 Now I can boot ubuntu, I have the windows 7 option in the boot list, but when I choose Windows 7, it shows me a black screen for a second and returns back to boot screen. Boot Info Script 0.55 dated February 15th, 2010 ============================= Boot Info Summary: ============================== = Windows is installed in the MBR of /dev/sda sda1: _________________________________________________________________________ File system: ntfs Boot sector type: Grub 2 Boot sector info: Grub 2 is installed in the boot sector of sda1 and looks at sector 304908237 of the same hard drive for core.img, but core.img can not be found at this location. No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows XP Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: sda3: _________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: _________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 10.10 Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda4: _________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: =========================== Drive/Partition Info: ============================= Drive: sda ___________________ _____________________________________________________ Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sda1 * 63 62,894,474 62,894,412 7 HPFS/NTFS /dev/sda2 62,894,478 291,579,749 228,685,272 7 HPFS/NTFS /dev/sda3 291,579,811 309,157,937 17,578,127 5 Extended /dev/sda5 291,579,813 309,157,937 17,578,125 83 Linux /dev/sda4 309,159,936 312,580,095 3,420,160 82 Linux swap / Solaris blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/sda1 1266BB2766BB0A8D ntfs /dev/sda2 BEDBF1147C76F703 ntfs DATA /dev/sda3: PTTYPE="dos" /dev/sda4 dd38226d-c7c9-4ae5-a726-6d18d34a22e4 swap /dev/sda5 e1dafd1c-f855-406b-8f9a-f9d527c70255 ext4 /dev/sda: PTTYPE="dos" ============================ "mount | grep ^/dev output: =========================== Device Mount_Point Type Options /dev/sda5 / ext4 (rw,errors=remount-ro,commit=0) =========================== sda5/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then 
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function recordfail {
  set recordfail=1
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}
function load_video {
  insmod vbe
  insmod vga
}
insmod part_msdos
insmod ext2
set root='(hd0,msdos5)'
search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255
if loadfont /usr/share/grub/unicode.pf2 ; then
  set gfxmode=640x480
  load_video
  insmod gfxterm
fi
terminal_output gfxterm
insmod part_msdos
insmod ext2
set root='(hd0,msdos5)'
search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255
set locale_dir=($root)/boot/grub/locale
set lang=en
insmod gettext
if [ "${recordfail}" = 1 ]; then
  set timeout=-1
else
  set timeout=10
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
### END /etc/grub.d/05_debian_theme ###
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.35-22-generic' --class ubuntu --class gnu-linux --class gnu --class os {
  recordfail
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos5)'
  search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255
  linux /boot/vmlinuz-2.6.35-22-generic root=UUID=e1dafd1c-f855-406b-8f9a-f9d527c70255 ro quiet splash
  initrd /boot/initrd.img-2.6.35-22-generic
}
menuentry 'Ubuntu, with Linux 2.6.35-22-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
  recordfail
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos5)'
  search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255
  echo 'Loading Linux 2.6.35-22-generic ...'
  linux /boot/vmlinuz-2.6.35-22-generic root=UUID=e1dafd1c-f855-406b-8f9a-f9d527c70255 ro single
  echo 'Loading initial ramdisk ...'
  initrd /boot/initrd.img-2.6.35-22-generic
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry "Memory test (memtest86+)" {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos5)'
  search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255
  linux16 /boot/memtest86+.bin
}
menuentry "Memory test (memtest86+, serial console 115200)" {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos5)'
  search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255
  linux16 /boot/memtest86+.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###
### BEGIN /etc/grub.d/30_os-prober ###
menuentry "Windows 7 (loader) (on /dev/sda1)" {
  insmod part_msdos
  insmod ntfs
  set root='(hd0,msdos1)'
  search --no-floppy --fs-uuid --set 1266bb2766bb0a8d
  chainloader +1
}
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi
### END /etc/grub.d/41_custom ###
=============================== sda5/etc/fstab: ===============================
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
proc /proc proc nodev,noexec,nosuid 0 0
/dev/sda5 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda4 during installation
UUID=dd38226d-c7c9-4ae5-a726-6d18d34a22e4 none swap sw 0 0
=================== sda5: Location of files loaded by Grub: ===================
156.1GB: boot/grub/core.img
156.3GB: boot/grub/grub.cfg
149.9GB: boot/initrd.img-2.6.35-22-generic
156.3GB: boot/vmlinuz-2.6.35-22-generic
149.9GB: initrd.img
156.3GB: vmlinuz
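The Boot Info Summary above pinpoints the fault: Grub 2 has been written into the boot sector of sda1 (the Windows partition) and points at a copy of core.img that no longer exists, so chainloading sda1 from the menu goes nowhere. A hedged sketch of the usual repair, assuming /dev/sda is the only disk; the exact steps depend on the setup:
    # From a Windows 7 install/recovery disc, restore the NTFS boot sector on sda1:
    #     bootrec /fixboot
    # Then, from Ubuntu, put grub in the MBR only (never in a partition boot
    # sector) and regenerate the menu:
    sudo grub-install /dev/sda
    sudo update-grub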

    Read the article

  • Can't get lm-sensors to load ATI Radeon temp or fan

    - by woody
    New to Linux and having minor issues :/ I followed this guide initially, but did not receive the proper output: it did not show my ATI Radeon HD 5000 temp or fan speed. Then I used this guide; the same problems were exhibited. No issues installing and no errors. I think it's not reading i2c for some reason. The proprietary driver is installed and functioning correctly according to fglrxinfo. I can use aticonfig commands and view both temp and fan. Any ideas on how to get it working under 'sensors'? When I run 'sudo sensors-detect' this is my output:
# sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
# System: LENOVO IdeaPad Y560 (laptop)
# Board: Lenovo KL3
This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.
Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y
Silicon Integrated Systems SIS5595... No
VIA VT82C686 Integrated Sensors... No
VIA VT8231 Integrated Sensors... No
AMD K8 thermal sensors... No
AMD Family 10h thermal sensors... No
AMD Family 11h thermal sensors... No
AMD Family 12h and 14h thermal sensors... No
AMD Family 15h thermal sensors... No
AMD Family 15h power sensors... No
Intel digital thermal sensor... Success! (driver `coretemp')
Intel AMB FB-DIMM thermal sensor... No
VIA C7 thermal sensor... No
VIA Nano thermal sensor... No
Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y
Probing for Super-I/O at 0x2e/0x2f
Trying family `National Semiconductor/ITE'... Yes
Found unknown chip with ID 0x8502
Probing for Super-I/O at 0x4e/0x4f
Trying family `National Semiconductor/ITE'... No
Trying family `SMSC'... No
Trying family `VIA/Winbond/Nuvoton/Fintek'... No
Trying family `ITE'... No
Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
Probing for `National Semiconductor LM78' at 0x290... No
Probing for `National Semiconductor LM79' at 0x290... No
Probing for `Winbond W83781D' at 0x290... No
Probing for `Winbond W83782D' at 0x290... No
Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y
Using driver `i2c-i801' for device 0000:00:1f.3: Intel 3400/5 Series (PCH)
Now follows a summary of the probes I have just done. Just press ENTER to continue:
Driver `coretemp':
* Chip `Intel digital thermal sensor' (confidence: 9)
To load everything that is needed, add this to /etc/modules:
#----cut here----
# Chip drivers
coretemp
#----cut here----
If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules?
(yes/NO) My output for 'sensors' is: acpitz-virtual-0 Adapter: Virtual device temp1: +58.0°C (crit = +100.0°C) coretemp-isa-0000 Adapter: ISA adapter Core 0: +56.0°C (high = +84.0°C, crit = +100.0°C) Core 1: +57.0°C (high = +84.0°C, crit = +100.0°C) Core 2: +58.0°C (high = +84.0°C, crit = +100.0°C) Core 3: +57.0°C (high = +84.0°C, crit = +100.0°C) and my '/etc/modules' is: # /etc/modules: kernel modules to load at boot time. # # This file contains the names of kernel modules that should be loaded # at boot time, one per line. Lines beginning with "#" are ignored. lp rtc # Generated by sensors-detect on Fri Nov 30 23:24:31 2012 # Chip drivers coretemp
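Note that lm-sensors can only show chips for which a kernel hwmon driver exists; with the proprietary fglrx driver loaded, the Radeon's sensors are not exposed over i2c at all, so 'sensors' will never list them, which matches the sensors-detect output above. A hedged sketch of reading them through the driver instead (aticonfig ships with fglrx; the pplib command syntax can vary between Catalyst versions):
    # GPU core temperature via the proprietary driver's OverDrive interface
    aticonfig --odgt
    # Current fan speed (syntax may differ on your Catalyst version)
    aticonfig --pplib-cmd "get fanspeed 0"
The open-source radeon driver does expose a hwmon temperature node under /sys, but using it would mean uninstalling fglrx first.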

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders: a tessellation shader to use the tessellation feature of DirectX11 :), a regular shader to show how it would look without tessellation, and a text shader to display debug info such as FPS, model count etc. All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and regular shader to render the scene. Additionally, I also want to be able to toggle the display of debug info using the text shader. Since implementing the tessellation shader, the text shader doesn't work anymore. When I activate the DebugText (rendered using the text shader) my screens go black for a while, and Windows displays the following message: "Display Driver stopped responding and has recovered". This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx
P.S. I already checked if my key inputs interrupt at the wrong time (during render or so..), but that seems to be OK.
Testing procedure:
1. Regular Shader without text shader
2. Add text shader to Regular Shader by key input (works now, I built the text shader back to only vertex and pixel shader) (something with the z buffer is still wrong...)
3. Remove text shader, then change shader to Tessellation Shader by key input
4. Then if I add the Text Shader or switch back to the Regular Shader
Switching/Render Shader
Here is the code snippet from Renderer.cpp where I choose the shader according to the boolean "m_useTessellationShader":
if(m_useTessellationShader)
{
    // Render the model using the tessellation shader
    ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(),
        worldMatrix, viewMatrix, projectionMatrix, textures, texturecount,
        m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
        (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT);
}
else
{
    // todo: loaded model depends on distance to camera
    // Render the model using the light shader.
    ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level,
        textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
        worldMatrix, viewMatrix, projectionMatrix);
}
And here is the code snippet from Mesh.cpp where I choose the topology according to the boolean "useTessellationShader":
// RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer
// and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it
// can then use the shader to render that buffer.
void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader)
{
    unsigned int stride;
    unsigned int offset;
    // Set vertex buffer stride and offset.
    stride = sizeof(VertexType);
    offset = 0;
    // Set the vertex buffer to active in the input assembler so it can be rendered.
    deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
    // Set the index buffer to active in the input assembler so it can be rendered.
    deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);
    // Check which shader is used to set the appropriate topology.
    // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
    if(useTessellationShader)
    {
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    }
    else
    {
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    }
    return;
}
RenderShader
Could there be a problem using sometimes only vertex and pixel shaders and, after switching, using vertex, hull, domain and pixel shaders? Here is a little overview of my architecture:
TextClass: uses font.vs and font.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
RegularShader: uses vertex.vs and pixel.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->HSSetShader(m_hullShader, NULL, 0);
deviceContext->DSSetShader(m_domainShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
ClearState
I'd like to switch between 2 shaders and it seems they have different context parameters, right? The ClearState method's documentation says it resets the following parameters to NULL.
I found the following in my Direct3D class:
depth-stencil state - m_deviceContext->OMSetDepthStencilState
rasterizer state - m_deviceContext->RSSetState(m_rasterState);
blend state - m_device->CreateBlendState
viewports - m_deviceContext->RSSetViewports(1, &viewport);
I found the following in every shader class:
input/output resource slots - deviceContext->PSSetShaderResources
shaders - deviceContext->VSSetShader to deviceContext->PSSetShader
input layouts - device->CreateInputLayout
sampler state - device->CreateSamplerState
These two I didn't understand, where can I find them?
predications - ?
scissor rectangles - ?
Do I need to store them all locally so I can switch between them, because it doesn't feel right to reinitialize the Direct3D and the shaders on every switch (key input)?!
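One thing worth checking, since it commonly produces exactly this device-hang-and-recover symptom: shader stages are sticky on the device context, so after rendering with the tessellation shader the hull and domain shaders stay bound unless they are explicitly cleared. Drawing a triangle list with a hull shader still bound is invalid and can reset the driver. A minimal sketch of the idea; the method and member names here are assumptions based on the snippets above:
    // When binding a pipeline that does not tessellate, explicitly unbind the
    // hull and domain stages left over from the tessellation shader.
    void ShaderManager::bindRegularShader(ID3D11DeviceContext* deviceContext)
    {
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->HSSetShader(NULL, NULL, 0);  // clear the stale hull shader
        deviceContext->DSSetShader(NULL, NULL, 0);  // clear the stale domain shader
        deviceContext->PSSetShader(m_pixelShader, NULL, 0);
        deviceContext->PSSetSamplers(0, 1, &m_sampleState);
    }
The same applies before the text shader draws, and the topology needs to be back on D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST for any non-tessellated draw call.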

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “When will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.
Tape storage should be the preferred format because, as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk” (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn’t have to care what technology the third party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it’s free, which allows tape to maintain all of its cost advantages over disk!
Note 1 - IDC Report, April 2011, “IDC’s Archival Storage Solutions Taxonomy, 2011”
- Brian Zents
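To make the “memory stick” analogy concrete, here is roughly what working with an LTFS volume looks like from a shell. This is a sketch only: the device name, mount point, and file name are assumptions, and the exact ltfs options vary between implementations.
    # Mount the LTFS-formatted cartridge sitting in the drive (device name assumed)
    mkdir -p /mnt/ltfs
    ltfs -o devname=/dev/sg3 /mnt/ltfs
    # The cartridge now behaves like any other mounted file system
    ls /mnt/ltfs
    cp scene42_raw.mov /mnt/ltfs/
    # Unmount before ejecting so the index partition is written back to tape
    umount /mnt/ltfs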

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
    You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere, and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox. Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I’m not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I’m going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control.
Defining assemblies as fully-trusted
In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain:
// get the Assembly object for the assembly
Assembly assemblyWithApi = ...
// get the StrongName from the assembly's collection of evidence
StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();
// create the sandbox
AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName);
Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox. So now you have a class that you want the sandboxed code to call:
// within assemblyWithApi
public class MyApi {
    public static void MethodToDoThings() { ... }
}
// within the sandboxed dll
public class UntrustedSandboxedClass {
    public void DodgyMethod() {
        ...
        MyApi.MethodToDoThings();
        ...
    }
}
However, if you try to do this, you get quite an ugly exception:
MethodAccessException: Attempt by security transparent method ‘UntrustedSandboxedClass.DodgyMethod()’ to access security critical method ‘MyApi.MethodToDoThings()’ failed.
Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code.
Security transparency and AllowPartiallyTrustedCallersAttribute
So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api:
[SecuritySafeCritical]
public static void MethodToDoThings() { ... }
However, this doesn’t solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What’s going on? By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods.
    This is because it may not have been designed in a secure way when called from transparent code – as we’ll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and close any potential security holes. This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails. So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly:
[assembly: AllowPartiallyTrustedCallers]
and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application.
Conclusion
That’s the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, an assembly with APTCA applied to it means that you have to run a full security audit of every type and member in the assembly. If you don’t, then you could inadvertently open a security hole. I’ll be looking at ways this can happen in my next post.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let’s take a simple query: “What is the most expensive air-mail order we have received to date?”
SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5;
The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, “IN MEMORY”, in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won’t use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it’s faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.
1. Access only the column data needed
The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.
2. Scan and filter data in its compressed format
When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where it will scan the data in its uncompressed form, which could be 20X larger.
3. Prune out any unnecessary data within each column
The fastest read you can execute is the read you don’t do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU.
Since we have an equality predicate, we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.
4. Use SIMD to apply filter predicates
For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing:
column display_name format a30
SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%';
If we check the session statistics after we execute our query, the results would be as follows:
SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5;
SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned');
As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
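As a worked example of that session-statistics check, a query along these lines joins v$mystat to v$statname for the current session (a sketch; statistic display names can vary slightly by release):
    -- Session-level In-Memory statistics for the current session
    SELECT n.display_name, s.value
    FROM   v$mystat s
           JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE  n.display_name IN ('IM scan CUs columns accessed',
                              'IM scan segments minmax eligible',
                              'IM scan CUs pruned')
    ORDER  BY n.display_name;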

    Read the article

  • Tip #13 java.io.File Surprises

    - by ByronNevins
    There is an assumption that I've seen in code many times that is totally wrong. And this assumption can easily bite you. The assumption is: File.getAbsolutePath and getAbsoluteFile return paths that are not relative. Not true! Sort of. At least not in the way many people would assume. All they do is make sure that the beginning of the path is absolute. The rest of the path can be loaded with relative path elements. What do you think the following code will print?
public class Main {
    public static void main(String[] args) {
        try {
            File f = new File("/temp/../temp/../temp/../");
            File abs = f.getAbsoluteFile();
            File parent = abs.getParentFile();
            System.out.println("Exists: " + f.exists());
            System.out.println("Absolute Path: " + abs);
            System.out.println("FileName: " + abs.getName());
            System.out.printf("The Parent Directory of %s is %s\n", abs, parent);
            System.out.printf("The CANONICAL Parent Directory of CANONICAL %s is %s\n", abs, abs.getCanonicalFile().getParent());
            System.out.printf("The CANONICAL Parent Directory of ABSOLUTE %s is %s\n", abs, parent.getCanonicalFile());
            System.out.println("Canonical Path: " + f.getCanonicalPath());
        }
        catch (IOException ex) {
            System.out.println("Got an exception: " + ex);
        }
    }
}
Output:
Exists: true
Absolute Path: D:\temp\..\temp\..\temp\..
FileName: ..
The Parent Directory of D:\temp\..\temp\..\temp\.. is D:\temp\..\temp\..\temp
The CANONICAL Parent Directory of CANONICAL D:\temp\..\temp\..\temp\.. is null
The CANONICAL Parent Directory of ABSOLUTE D:\temp\..\temp\..\temp\.. is D:\temp
Canonical Path: D:\
Notice how it says that the parent of D:\ is D:\temp!!! The file, f, is really the root directory. The parent is supposed to be null. I learned about this the hard way! getParentXXX simply hacks off the final item in the path. You can get totally unexpected results like the above. Easily. I filed a bug on this behavior a few years ago [1].
Recommendations:
(1) Use getCanonical instead of getAbsolute. There is a 1:1 mapping of files and canonical filenames, i.e., each file has one and only one canonical filename, and it will definitely not have relative path elements in it. There are an infinite number of absolute paths for each file.
(2) To get the parent file for File f, do the following instead of getParentFile: File parent = new File(f, "..");
[1] http://bt2ws.central.sun.com/CrPrint?id=6687287
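As a quick illustration of recommendation (1), here is a sketch you can drop into any project; the drive-letter output above is just the example environment:
    import java.io.File;
    import java.io.IOException;

    public class CanonicalParentDemo {
        public static void main(String[] args) throws IOException {
            File f = new File("/temp/../temp/../temp/../");
            // getAbsoluteFile keeps the ".." elements, so the "parent" is bogus.
            System.out.println(f.getAbsoluteFile().getParentFile());
            // getCanonicalFile resolves them first: the file is really the root,
            // so the parent is correctly null.
            System.out.println(f.getCanonicalFile().getParentFile());
        }
    }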

    Read the article

  • How to UEFI install Ubuntu 12.10?

    - by Geezanansa
    Running a newer FM1 motherboard with an AMD 3870K APU and a new 1TB HDD. Following the advice in the motherboard manual and https://help.ubuntu.com/community/UEFI, I have now got to the grub option screen for the UEFI install; see http://imgur.com/VW5vz. The dvd.iso being used is Ubuntu 12.10 desktop amd64 from ubuntu.com. The HDD has had a GPT partition table made for it, using gparted in a live desktop session booted in BIOS mode. (*edit/update: although the old CD updates on running, it has an old kernel; it did make a GPT, but that version of gparted uses fdisk, whereas gdisk is required to make a GPT. Think I am going to have to spend more time here http://www.dedoimedo.com/computers/gparted.html lol. Now using the gparted from the 12.10 live session to make partitions, following the guidance at https://help.ubuntu.com/community/UEFI#Creating_an_EFI_partition, but I can only boot to the grub option screen http://imgur.com/VW5vz; when the 12.10 options to "try ubuntu" or "install ubuntu" are selected they give errors as described below.*) But after making the GPT I decided to leave it as unformatted/unallocated space, with the intention of using the installer to set up the partitions. Update: gparted now sees the HDD as http://imgur.com/hFIvm, as described above. *Booting the live DVD in EFI mode gives "Secure Boot not installed" just before the grub kernel option list with the option to "install ubuntu", but I get "can not read cd/0" and "the kernel must be loaded first" errors when that option is selected. Any pointers on how to get the installer going for a UEFI install would be good. Thanks in advance.
Update: Hopefully these screenshots can help better highlight where I am going wrong or if there is something else going on: http://imgur.com/g30RB, http://imgur.com/VW5vz, http://imgur.com/31E0q, http://imgur.com/bnuaG, http://imgur.com/y4KGu, http://imgur.com/3u2QE, http://imgur.com/n9lN3, http://imgur.com/FEKvz, http://imgur.com/hFIvm
Update: Thank you fernando garcia for pointing me in the right direction to start the process of elimination. What I have done since asking the question is a little homework, starting here http://askubuntu.com/faq#bounty and here http://askubuntu.com/questions/how-to-ask. Looking at other similar questions was good fun, and I found this 12.10 UEFI Secure Boot install the most relevant in helping get Ubuntu to UEFI-install on my system. In response to wolverine's question, this article was referred to: http://web.dodds.net/~vorlon/wiki/blog/SecureBoot_in_Ubuntu_12.10/ This article in its first sentence gives a link to http://www.ubuntu.com/download, which is where I downloaded the 12.10 desktop amd64 .iso (and others), but I have been unable to do an EFI install of Ubuntu on this system, and as this is a new system I have ended up just going with the BIOS installer, which at least puts my mind at ease that I have not bricked my new mobo (had to do a CLRCMOS and flash to the latest BIOS version). So it possibly could be the BIOS settings or the BIOS version being used that is the problem. To try and eliminate the BIOS version: I can not get to the POST screen in order to ID the BIOS version being used. Pressing Tab to show POST instead of the logo, and trying Pause/Break to catch POST, is proving difficult. If the logo screen in the BIOS is disabled I just get a black screen; no POST is shown, and pressing Tab does not show POST.
I appreciate that using the appropriate BIOS settings and the latest 12.10 release should simply get the UEFI installer running when it is selected from the grub list (nice graphic details in the "Identifying if the computer boots the CD in EFI mode" section at https://help.ubuntu.com/community/UEFI#Identifying_if_the_computer_boots_the_CD_in_EFI_mode). And to confirm whether the HDD is booting in EFI mode (https://help.ubuntu.com/community/UEFI#Identifying_if_the_computer_boots_the_HDD_in_EFI_mode), running the command
[ -d /sys/firmware/efi ] && echo "EFI boot on HDD" || echo "Legacy boot on HDD"
gave: Legacy boot on HDD
This is as expected, because I allowed the BIOS installer (which was 12.04 desktop amd64, after trying 12.10 desktop amd64 in EFI mode) to run to get a working installation. This is not what was intended or wished for, but I wanted a working OS to bench-test the new mobo, i.e. prove it is working. There are other options, as in installing other boot managers/loaders, but I do not wish to do so, as shim should get grub2 going once Secure Boot has been signed. (I now have a rough idea of what should happen; it just ain't happening. Is it possible AHCI drivers are required?) I will post the boot info script URL of the updated config/setup. The original question seems irrelevant to what is being said in this update, but as the problem is not resolved I will keep on trying to EFI install! I.e. the problem is the same as when the question was asked; I am just trying to update. I have tried to edit and update the best I can!
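For anyone triaging the same thing, a few checks from a live session make the BIOS-vs-EFI state obvious. A sketch; note that efibootmgr only works when the session itself was booted in EFI mode:
    # Did this session boot via EFI? The directory only exists under EFI boot.
    [ -d /sys/firmware/efi ] && echo "EFI session" || echo "Legacy (BIOS) session"
    # List the firmware's boot entries (requires an EFI-booted session)
    sudo efibootmgr -v
    # Confirm the disk is GPT and carries an EFI System Partition
    sudo parted /dev/sda print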

    Read the article

  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of our newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/. The 1.1.0 release of MySQL for Excel introduces the following feature:
Edit MySQL Data
This may be the coolest feature so far; users will be able to edit the data in a MySQL table using MS Excel in a very friendly and intuitive way. Edit Data supports inserting new rows, deleting existing rows and updating existing data as easily as playing with data in an Excel spreadsheet and pushing the changes back to the server.
This version also contains the following bug fixes:
- Enabled the following checkboxes in the Append Data's Advanced Options dialog and added code in the Append Data dialog to use the checkboxes as follows: "Automatically store the column mapping for the given table": if checked, the current mapping will be stored automatically after clicking the Append button if the append operation is successful and there is no mapping for the current connection.schema.table already; the new mapping is stored with a proposed name of Mapping. "Reload stored column mapping for the selected table automatically": if checked, the first Stored Mapping found where all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded.
- Fixed code in Append Data that applies a stored column mapping to skip target columns where the associated mapping is empty (saved as a -1).
- Enclosed the Add-In's startup code in a try-catch block in order to log any possible error thrown during startup, and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code. Also changed the wrapper method that calls the MySQLUtility to write messages to the log to make logging easier, thus changing the log call throughout all the code that contains a try-catch block.
- Added code to the main wix configuration file to check if a newer version is already installed and, if so, abort the installation.
- Fixed code to refresh the Import Procedure Form's preview grid's data source to repaint its contents every time the Call button is pressed.
- Added code to re-pull connections after connections are migrated from Excel to Workbench.
- Fixed code so that when the Append Data's Automatic Mapping is performed, any subsequent change on a mapping resets the mapping to a Manual Mapping.
- Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container.
- Fixed a GUID in the main wix configuration file so now previous versions are uninstalled during a new installation.
- Added an option to the Export Data's Advanced Options dialog to remove columns with no data; by default the Export Dialog will only flag those columns as Excluded.
- Added code to display a warning and paint a column red if the column name in the Export Data dialog is not set, display a warning if the table name is not set, and stack warnings but not display them if a column is Excluded; warnings are displayed normally for columns once they are not Excluded anymore.
- Added code to prevent the Append and Export of Data if more than 1 selection is made (selecting more than 1 area holding the Ctrl key while selecting Excel cells).
- Fixed a problem that prevented MySQL for Excel from loading when Display settings in Windows 7 are set to Adjust to Best Performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL).
- Fixed code that renames the auto-generated Primary Key column when the Table name changes, since it was not detecting if a column with the same name already existed in the table. The column duplication was not actually happening; it looked that way because the automatically generated PK column was not detecting that a column had that same name.
- Fixed code in the Export Data dialog to always set an empty string instead of null to the MySQLDataColumn properties that store MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType).
- Added code to display a warning and color red a column whose data type has not been set by the user or has been manually cleared.
- Added code to output exception messages to the application log consistently in all places where exceptions are caught.
A series of blog posts explaining the new Edit MySQL Data feature and the other existing features is coming in this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • The clock hands of the buffer cache

    - by Tony Davis
    Over a leisurely beer at our local pub, the Waggon and Horses, Phil Factor was holding forth on the esoteric, but strangely poetic, language of SQL Server internals, riddled as it is with 'sleeping threads', 'stolen pages', and 'memory sweeps'. Generally, I remain immune to any twinge of interest in the bowels of SQL Server, reasoning that there are certain things that I don't and shouldn't need to know about SQL Server in order to use it successfully. Suddenly, however, my attention was grabbed by his mention of the 'clock hands of the buffer cache'. Back at the office, I succumbed to a moment of weakness and opened up Google. He wasn't lying. SQL Server maintains various memory buffers, or caches. For example, the plan cache stores recently-used execution plans. The data cache in the buffer pool stores frequently-used pages, ensuring that they may be read from memory rather than via expensive physical disk reads. These memory stores are classic LRU (Least Recently Used) buffers, meaning that, for example, the least recently used pages in the data cache become candidates for eviction (after first writing the page to disk if it has changed since being read into the cache). SQL Server clearly needs some mechanism to track which pages are candidates for being cleared out of a given cache, when it is getting too large, and it is this mechanism that is somewhat more labyrinthine than I previously imagined. Each page that is loaded into the cache has a counter, a miniature "wristwatch", which records how recently it was last used. This wristwatch gets reset to "present time" each time a page gets updated, and then as the page 'ages' it clicks down towards zero, at which point the page can be removed from the cache. But what if SQL Server is suffering memory pressure and urgently needs to free up more space than is represented by zero-counter pages (or plans etc.)? This is where our 'clock hands' come in. Each cache has associated with it a "memory clock". Like most conventional clocks, it has two hands; one "external" clock hand, and one "internal". Slava Oks is very particular in stressing that these names have "nothing to do with the equivalent types of memory pressure". He's right, but the names do, in that peculiar Microsoft tradition, seem designed to confuse. The hands do relate to memory pressure; the cache "eviction policy" is determined by both global and local memory pressures on SQL Server. The "external" clock hand responds to global memory pressure, in other words pressure on SQL Server to reduce the size of its memory caches as a whole. Global memory pressure – which just to confuse things further seems sometimes to be referred to as physical memory pressure – can be either external (from the OS) or internal (from the process itself, e.g. due to limited virtual address space). The internal clock hand responds to local memory pressure, in other words the need to reduce the size of a single, specific cache. So, for example, if a particular cache, such as the plan cache, reaches a defined "pressure limit" the internal clock hand will start to turn and a memory sweep will be performed on that cache in order to remove plans from the memory store. During each sweep of the hands, the usage counter on the cache entry is reduced in value, effectively moving its "last used" time to further in the past (in effect, setting back the wrist watch on the page a couple of hours) and increasing the likelihood that it can be aged out of the cache.
There is even a special Dynamic Management View, sys.dm_os_memory_cache_clock_hands, which allows you to interrogate the passage of the clock hands. Frequently turning hands equates to excessive memory pressure, which will lead to performance problems. Two hours later, I emerged from this rather frightening journey into the heart of SQL Server memory management, fascinated but still unsure if I'd learned anything that I'd put to any practical use. However, I certainly began to agree that there is something almost Tolkienian in the language of the deep recesses of SQL Server. Cheers, Tony.
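If you want to watch the hands yourself, a query along these lines works (a sketch; the exact column list varies a little between SQL Server versions):
    -- How often each cache's clock hands have swept, and what the sweeps removed
    SELECT name,
           type,
           clock_hand,
           clock_status,
           rounds_count,
           removed_all_rounds_count
    FROM sys.dm_os_memory_cache_clock_hands
    ORDER BY rounds_count DESC;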

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself you can do so here.
Three new editions
.NET Reflector 7 will come in three new editions: .NET Reflector, .NET Reflector VS, and .NET Reflector VSPro. The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition.
Tabbed browsing
.NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version.
PowerCommands add-in
We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model. All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people are finding useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list.
.NET Reflector VS
.NET Reflector VS adds a brand new Reflector object browser into Visual Studio to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector.
    When this is selected, the add-in decompiles the given assembly. Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information. You can therefore navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly.
.NET Reflector VSPro
.NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation. They are then decompiled. And then you can debug as if they were one of your own source code files.
The future of .NET Reflector
As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I reiterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7 and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

  • I have a problem with an AE1200 Cisco/Linksys Wireless-N USB adapter having stopped working after I ran the update manager in Ubuntu 12.04

    - by user69670
    Here is the problem: I use a Cisco/Linksys AE1200 wireless network adapter to connect my desktop to a public wifi internet connection. I use ndiswrapper to use the Windows driver, and it had been working fine for me until I ran the update manager overnight a few days ago. When I woke up it was asking for the normal computer restart to implement the changes, but after rebooting the computer the wireless adapter did not work; the status light on the adapter did not light up, even though Ubuntu recognizes it is there and, according to ndiswrapper, the drivers are loaded and the hardware is present. The grep command is being a bitch for some unknown reason today, so this will be long, sorry.
Output from "lspci": 00:00.0 Host bridge: Advanced Micro Devices [AMD] nee ATI Radeon Xpress 200 Host Bridge (rev 01) 00:01.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RS480 PCI Bridge 00:12.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB600 Non-Raid-5 SATA 00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI0) 00:13.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI1) 00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI2) 00:13.3 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI3) 00:13.4 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI4) 00:13.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB Controller (EHCI) 00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 13) 00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB600 IDE 00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB600 PCI to LPC Bridge 00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge 01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RC410 [Radeon Xpress 200] 02:02.0 Communication controller: Conexant Systems, Inc. HSF 56k Data/Fax Modem 02:03.0 Multimedia audio controller: Creative Labs CA0106 Soundblaster 02:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
Output from "lsusb": Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 009: ID 13b1:0039 Linksys AE1200 802.11bgn Wireless Adapter [Broadcom BCM43235] Bus 003 Device 002: ID 045e:0053 Microsoft Corp. Optical Mouse Bus 004 Device 002: ID 1043:8006 iCreate Technologies Corp. Flash Disk 32-256 MB
Output from "ifconfig": eth0 Link encap:Ethernet HWaddr 00:19:21:b6:af:7c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Base address:0xb400 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:13232 errors:0 dropped:0 overruns:0 frame:0 TX packets:13232 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1084624 (1.0 MB) TX bytes:1084624 (1.0 MB)
Output from "iwconfig": lo no wireless extensions. eth0 no wireless extensions.
Output from "lsmod": Module Size Used by nls_iso8859_1 12617 1 nls_cp437 12751 1 vfat 17308 1 fat 55605 1 vfat uas 17828 0 usb_storage 39646 1 nls_utf8 12493 1 udf 84366 1 crc_itu_t 12627 1 udf snd_ca0106 39279 2 snd_ac97_codec 106082 1 snd_ca0106 ac97_bus 12642 1 snd_ac97_codec snd_pcm 80845 2 snd_ca0106,snd_ac97_codec rfcomm 38139 0 snd_seq_midi 13132 0 snd_rawmidi 25424 2 snd_ca0106,snd_seq_midi bnep 17830 2 parport_pc 32114 0 bluetooth 158438 10 rfcomm,bnep ppdev 12849 0 snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq snd 62064 11 snd_ca0106, snd_ac97_codec,snd_pcm,snd_rawj9fe snd_ca0106,snd_ac97_codec,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 14635 1 snd snd_page_alloc 14108 2 snd_ca0106,snd_pcm sp5100_tco 13495 0 i2c_piix4 13093 0 radeon 733693 3 ttm 65344 1 radeon drm_kms_helper 45466 1 radeon drm 197692 5 radeon,ttm,drm_kms_helper i2c_algo_bit 13199 1 radeon mac_hid 13077 0 shpchp 32325 0 ati_agp 13242 0 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp usbhid 41906 0 hid 77367 1 usbhid 8139too 23283 0 8139cp 26759 0 pata_atiixp 12999 1 Output from "sudo lshw -C network": *-network description: Ethernet interface product: RTL-8139/8139C/8139C+ vendor: Realtek Semiconductor Co., Ltd. physical id: 5 bus info: pci@0000:02:05.0 logical name: eth0 version: 10 serial: 00:19:21:b6:af:7c size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 10 0bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=8139too driverversion=0.9.28 duplex=half latency=64 link=no maxlatency=64 mingnt=32 multicast=yes port=MII speed=10Mbit/s resources: irq:20 ioport:b400(size=256) memory:ff5fdc00-ff5fdcff Output from "iwlist scan": lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. Output from "lsb_release -d": Ubuntu 12.04 LTS Output from "uname -mr": 3.2.0-24-generic-pae i686 Output from "sudo /etc/init.d/networking restart": * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces * Reconfiguring network interfaces... [ OK ]

    Read the article

  • The standards that fail us and the intellectual bubble

    - by Jeff
    There has been a great deal of noise in the techie community about standards, and a sudden and unexplainable hate for Flash. This noise isn't coming from consumers... the countless soccer moms, teens and your weird uncle Bob, it's coming from the people who build (or at least claim to build) the stuff those consumers consume. If you could survey the position of consumers on the topic, they'd likely tell you that they just want stuff on the Web to work.

    The noise goes something like this: Web standards are the correct and right thing to use across the Intertubes, and anything not a part of those standards (Flash) is bad. Furthermore, the more recent noise is centered around the idea that HTML 5, along with JavaScript, is the right thing to use. The arguments against Flash are... well, the truth is I haven't seen a good argument. I see anecdotal nonsense about high CPU usage and things I'd never think to check when I'm watching Piano Cat on YouTube, but these aren't arguments to me. Sure, I've seen it crash a browser a few times, but it's totally rare.

    But let's go back to standards. Yes, standards have played an important role in establishing the ubiquity of the Web. The protocols themselves, TCP/IP and HTTP, have been critical. HTML, which has served us well for a very long time, established an incredible foundation. JavaScript did an OK job, and thanks to clever programmers writing great frameworks like jQuery, is becoming more and more useful. CSS is awful (there, I said it, I feel SO much better), and I'll never understand why it's so disconnected and different from anything else. It doesn't help that it's so widely misinterpreted by different browsers. Still, there's no question that standards are a good thing, and they've been good for the Web, consumers and publishers alike.

    HTML 4 has been with us for more than a decade. In Web years, that might as well be 80. HTML 5, contrary to popular belief, is not a standard, and likely won't be for many years to come. In fact, the Web hasn't really evolved at all in terms of its standards. The tools that generate the standard markup and script have, but at the end of the day, we're still living with standards that are more than ten years old. The "official" standards process has failed us.

    The Web evolved anyway, and did not wait for standards bodies to decide what to do next. It evolved in part because Macromedia, then Adobe, kept evolving Flash. In the earlier days, it mostly just did obnoxious splash pages, but then it started doing animation, and then rich apps as they added form input. Eventually it found its killer app: video. Now more than 95% of browsers have Flash installed. Consumers are better for it.

    But I'll do it one better... I'll go out on a limb and say that Flash is a standard. If it's that pervasive, I don't care what you tell me, it's a standard. Just because a company owns it doesn't mean that it's evil or not a standard. And hey, it pains me to say that as a developer, because I think the dev tools are the suck (more on that in a minute). But again, consumers don't care. They don't even pay for Flash. The bottom line is that if I put something Flash-based on the Internet, it's likely that my audience will see it.

    And what about the speed of standards owned by a company? Look no further than Silverlight. Silverlight 2 (which I consider the "real" start to the story) came out about a year and a half ago. Now version 4 is out, and it has come a very long way in its capabilities. If you believe Riastats.com, more than half of browsers have it now. It didn't have to wait for standards bodies and nerds drafting documents; it's out today. At this rate, Silverlight will be on version 6 or 7 by the time HTML 5 is a ratified standard.

    Back to the noise: one of the things that has continually disappointed me about this profession is the number of people who get stuck in an intellectual bubble, color it with dogmatic principles, and completely ignore the actual marketplace where this stuff all has to live. We aren't machines; binary thinking that forces us to choose between "open standards" and "proprietary lock-in" (the most loaded b.s. FUD term evar) isn't smart at all. The truth is that the <object> tag has allowed us to build incredible stuff on top of the old standards, and consumers have benefitted greatly. Consumer desire, capitalism, and yes, standards ratified by nerds who think about this stuff for years have all played a role in the broad adoption of the Interwebs.

    We could all do without the noise. At the end of the day, I'm going to build stuff for the Web that's good for my users, and I'm not going to base my decisions on a techie bubble religion. Imagine what the brilliant minds behind the noise could do for the Web if they joined me in that pursuit.

    Read the article

  • Can't get lm-sensors to load ATI Radeon temp and fan or output all settings

    - by woody
    New to Linux and having minor issues :/ . I followed this guide initially but did not receive the proper output: it did not show my ATI Radeon HD 5000 temp or fan speed. Then I used this guide, and the same problems were exhibited. No issues installing and no errors. I think it's not reading i2c for some reason. The proprietary driver is installed and functioning correctly according to fglrxinfo. I can use aticonfig commands and view both temp and fan. Any ideas on how to get the ATI Radeon sensors working under 'sensors'? When I run 'sudo sensors-detect' this is my output:

    # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
    # System: LENOVO IdeaPad Y560 (laptop)
    # Board: Lenovo KL3

    This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.

    Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y
    Silicon Integrated Systems SIS5595...                       No
    VIA VT82C686 Integrated Sensors...                          No
    VIA VT8231 Integrated Sensors...                            No
    AMD K8 thermal sensors...                                   No
    AMD Family 10h thermal sensors...                           No
    AMD Family 11h thermal sensors...                           No
    AMD Family 12h and 14h thermal sensors...                   No
    AMD Family 15h thermal sensors...                           No
    AMD Family 15h power sensors...                             No
    Intel digital thermal sensor...                             Success! (driver `coretemp')
    Intel AMB FB-DIMM thermal sensor...                         No
    VIA C7 thermal sensor...                                    No
    VIA Nano thermal sensor...                                  No

    Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y
    Probing for Super-I/O at 0x2e/0x2f
    Trying family `National Semiconductor/ITE'...               Yes
    Found unknown chip with ID 0x8502
    Probing for Super-I/O at 0x4e/0x4f
    Trying family `National Semiconductor/ITE'...               No
    Trying family `SMSC'...                                     No
    Trying family `VIA/Winbond/Nuvoton/Fintek'...               No
    Trying family `ITE'...                                      No

    Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
    Probing for `National Semiconductor LM78' at 0x290...       No
    Probing for `National Semiconductor LM79' at 0x290...       No
    Probing for `Winbond W83781D' at 0x290...                   No
    Probing for `Winbond W83782D' at 0x290...                   No

    Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y
    Using driver `i2c-i801' for device 0000:00:1f.3: Intel 3400/5 Series (PCH)

    Now follows a summary of the probes I have just done. Just press ENTER to continue:

    Driver `coretemp':
      * Chip `Intel digital thermal sensor' (confidence: 9)

    To load everything that is needed, add this to /etc/modules:
    #----cut here----
    # Chip drivers
    coretemp
    #----cut here----
    If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones!

    Do you want to add these lines automatically to /etc/modules?
    (yes/NO)

    My output for 'sensors' is:

    acpitz-virtual-0
    Adapter: Virtual device
    temp1:        +58.0°C  (crit = +100.0°C)

    coretemp-isa-0000
    Adapter: ISA adapter
    Core 0:       +56.0°C  (high = +84.0°C, crit = +100.0°C)
    Core 1:       +57.0°C  (high = +84.0°C, crit = +100.0°C)
    Core 2:       +58.0°C  (high = +84.0°C, crit = +100.0°C)
    Core 3:       +57.0°C  (high = +84.0°C, crit = +100.0°C)

    and my '/etc/modules' is:

    # /etc/modules: kernel modules to load at boot time.
    #
    # This file contains the names of kernel modules that should be loaded
    # at boot time, one per line. Lines beginning with "#" are ignored.

    lp
    rtc

    # Generated by sensors-detect on Fri Nov 30 23:24:31 2012
    # Chip drivers
    coretemp
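
    A sketch of a likely explanation rather than a confirmed answer: with the proprietary fglrx driver installed, the Radeon typically exposes its temperature and fan through aticonfig's Overdrive interface rather than through an i2c/hwmon device, so lm-sensors has nothing to find and 'sensors' will only ever show the ACPI and CPU readings. The coretemp line in /etc/modules only takes effect at the next boot, but the module can be loaded immediately and the GPU read through fglrx itself:

    sudo modprobe coretemp                    # load the CPU sensor driver now, no reboot needed
    sensors                                   # coretemp readings should appear (they do, above)
    aticonfig --odgt                          # GPU temperature via fglrx Overdrive
    aticonfig --pplib-cmd "get fanspeed 0"    # fan speed; exact syntax varies by fglrx version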

    Read the article

  • Fun With the Chrome JavaScript Console and the Pluralsight Website

    - by Steve Michelotti
    Originally posted on: http://geekswithblogs.net/michelotti/archive/2013/07/24/fun-with-the-chrome-javascript-console-and-the-pluralsight-website.aspx

    I'm currently working on my third course for Pluralsight. Everyone already knows that Scott Allen is a "dominating force" for Pluralsight, but I was curious how many courses other authors have published as well. The Pluralsight Authors page - http://pluralsight.com/training/Authors - shows all 146 authors, and you can click on any author's page to see how many (and which) courses they have authored. The problem is: I don't want to have to click into 146 pages to get a count for each author. With this in mind, I figured I could write a little JavaScript using the Chrome JavaScript console to do some "detective work."

    My first step was to figure out how the HTML was structured on this page so I could do some screen-scraping. Right-click the first author and choose "Inspect Element". I can see there is a primary <div> with a class of "main" which contains all the authors. Each author is in an <h3> with an <a> tag containing their name and a link to their page.

    This web page already has jQuery loaded, so I can use $ directly from the console. This allows me to just use jQuery to inspect items on the current page. Notice this is a multi-line command; in order to use multiple lines in the console you have to press SHIFT-ENTER to go to the next line. Now I can see I'm extracting data just fine.

    At this point I want to follow each URL, then screen-scrape that next page to see how many courses each author has done. Looking at the author detail page, I can see we have a table (with a CSS class of "course") that contains rows for each course authored. This means I can get the number of courses pretty easily.

    Now I can put this all together. Back on the authors page, I want to follow each URL, extract the returned HTML, and grab the count. In the code below, I simply use the jQuery $.get() method to get the author detail page, and the "data" variable in the callback contains the HTML. A nice feature of jQuery is that I can simply put this HTML string inside of $() and use jQuery selectors directly on it in conjunction with the find() method.

    Now I'm getting somewhere. I have every Pluralsight author and how many courses each one has authored. But that's not quite what I'm after – what I want to see are the authors that have the MOST courses in the library. What I'd like to do is put all of the data in an array and then sort that array descending by number of courses. I can add an item to the array after each author detail page is returned, but the catch here is that I can't perform the sort operation until ALL of the author detail pages have been fetched. The jQuery $.get() method is naturally an async method, so I essentially have 146 async calls, and I don't want to perform my sort action until ALL have completed (side note: don't run this script too many times or the Pluralsight servers might think you're an evil hacker attempting a DoS attack and deny you). My C# brain wants to use a WaitHandle WaitAll() method here, but this is JavaScript. I was able to do this by using the jQuery Deferred() object. I create a new deferred object for each request and push it onto a deferred array. After each request is complete, I signal completion by calling the resolve() method. Finally, I use a $.when.apply() method to execute my descending sort operation once all requests are complete.
    Here is my complete console command:

    var authorList = [],
        defList = [];
    $(".main h3 a").each(function() {
        var def = $.Deferred();
        defList.push(def);
        var authorName = $(this).text();
        var authorUrl = $(this).attr('href');
        $.get(authorUrl, function(data) {
            var courseCount = $(data).find("table.course tbody tr").length;
            authorList.push({ name: authorName, numberOfCourses: courseCount });
            def.resolve();
        });
    });
    $.when.apply($, defList).then(function() {
        console.log("*Everything* is complete");
        var sortedList = authorList.sort(function(obj1, obj2) {
            return obj2.numberOfCourses - obj1.numberOfCourses;
        });
        for (var i = 0; i < sortedList.length; i++) {
            console.log(sortedList[i]);
        }
    });

    And here are the results: WOW! John Sonmez has 44 courses!! And Matt Milner has 29! I guess Scott Allen isn't the only "dominating force". I would have assumed Scott Allen was #1, but he comes in as #3 in total course count (of course Scott has 11 courses in the Top 50, and 14 in the Top 100, which is incredible!). Given that I'm in the middle of producing only my third course, I better get to work!

    Read the article
