Search Results

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
    I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting. The questions from readers give me a good idea what other readers might be thinking as well. After reading my earlier article Simple Example to Configure Resource Governor – Introduction to Resource Governor – I received an email from a reader and we exchanged a few emails. After exchanging emails we both figured out what was going on. It was indeed interesting, and the reader suggested that I should blog about it. I asked for permission to publish his name but he does not like the attention, so we will just call him Jeff. I have converted our emails into a chat for easy consumption. Jeff: Your script does not work at all. I think there is a bug in SQL Server. Pinal: Would you please explain in detail? Jeff: Your code does not limit the CPU usage. Pinal: How did you measure it? Jeff: Well, we have third party tools for it, but let us say I have limited the resources for Reporting Services and used the script described in your blog. After that I ran only the Reporting Services workload, and the CPU usage still went well above 30%; it is not limited to 30% as described in your script. Clearly something is wrong somewhere. Pinal: Did you say you ONLY ran the reporting server load? Jeff: Yeah, to validate I ran ONLY the reporting server load and the CPU did not throttle at 30% as per your script. Pinal: Oh! I get it. Here is the answer - CAP_CPU_PERCENT = 30. Use it. Jeff: What is that? I thought your earlier script said it would throttle the Reporting Services workload and the Application/OLTP workload and balance them. Pinal: Exactly, that is correct. Jeff: You need to write more in email, buddy! Just like your blogs, your answers do not make sense! No offense! Pinal: Hmm…feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements with regard to the SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Configuration: [Read Earlier Post] Reporting Workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30 Application/OLTP Workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100 Example 1: If there is only the Reporting Workload on the server: SQL Server will not limit the workload's CPU usage to 30%; the instance will use all available CPU (if needed). In other words, in this scenario it will use more than 30% CPU. Example 2: If there is a Reporting Workload and a heavy Application/OLTP workload: SQL Server will allocate a maximum of 30% CPU resources to the Reporting Workload and allocate the remaining resources to the heavy Application/OLTP workload. The reason for this enhancement is better utilization of resources. Consider a single workload whose maximum CPU usage we have limited to 30%: the remaining unused CPU capacity would simply be wasted. In this situation SQL Server allows the workload to use more than 30% of the resources, leading to overall improved/optimized performance. However, in the case of multiple workloads where lots of resources are needed, the limits specified in MAX_CPU_PERCENT are honored. Example 3: If there is a situation where the maximum CPU limit has to be enforced: This is a very interesting scenario. When the maximum CPU limit has to be enforced irrespective of the workload and the enhanced algorithm, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive.
    It will never let the CPU usage of the reporting workload go over 30% in our case. You can use the keyword as follows:
      -- Creating Resource Pool for Report Server
      CREATE RESOURCE POOL ReportServerPool
      WITH
          ( MIN_CPU_PERCENT=0,
            MAX_CPU_PERCENT=30,
            CAP_CPU_PERCENT=40,
            MIN_MEMORY_PERCENT=0,
            MAX_MEMORY_PERCENT=30)
      GO
    Notice that there is MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40. This means that when the SQL Server instance is under heavy load from other workloads, the reporting workload will use at most 30% of the CPU. However, when the SQL Server instance is not under load from other workloads, it can go over the 30% limit. Because CAP_CPU_PERCENT is set to 40, it will not go over 40% in any case; the CPU usage is capped there. CAP_CPU_PERCENT puts a hard limit on the resource usage of the workload. Jeff: Nice Pinal, you should blog about it. [A day passes by] Pinal: Jeff, it is done! Click here to read it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
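    On its own, the ReportServerPool definition shown above does not govern anything until connections are classified into it via a workload group and a classifier function, which the excerpt does not show. The following is only a minimal sketch of that remaining wiring; the group name, the function name and the APP_NAME() test are illustrative assumptions, not part of the original script.
      -- Hypothetical workload group and classifier for the ReportServerPool above.
      -- Names and the APP_NAME() filter are assumptions for illustration only.
      CREATE WORKLOAD GROUP ReportServerGroup
      USING ReportServerPool;
      GO

      -- The classifier function must be created in the master database.
      CREATE FUNCTION dbo.fnWorkloadClassifier()
      RETURNS SYSNAME
      WITH SCHEMABINDING
      AS
      BEGIN
          -- Send Reporting Services connections to the capped pool,
          -- everything else to the default workload group.
          DECLARE @GroupName SYSNAME = 'default';
          IF (APP_NAME() LIKE 'Report Server%')
              SET @GroupName = 'ReportServerGroup';
          RETURN @GroupName;
      END
      GO

      ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnWorkloadClassifier);
      ALTER RESOURCE GOVERNOR RECONFIGURE;
      GO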

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure, and end users cannot change it. They are simple to use for end users. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models. For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report. Typically, they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process. With data mining, a structure is not selected in advance; the mining algorithms search for the structure. As a result, data mining can give you the most valuable results because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes. Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.
    Books
    Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
    Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
    Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools.
    Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both a business and a technical perspective.
    Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
    Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
    Jiawei Han and Micheline Kamber: Data Mining: Concepts and Techniques – an in-depth explanation of the most popular data mining algorithms.
    Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
    Paolo Giudici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.
    Forthcoming presentations
    I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:
    Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
    Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on the Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
    Sinergija 2013, Belgrade, Serbia
    Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
    SQL Rally Amsterdam, Netherlands
    Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel.
    Why three different titles for the same presentation? I don’t know, I guess I forgot the name I proposed every time right after I sent the proposal.
    Courses
    Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.
    OK, now you know: no more excuses, start learning data mining, and get the most out of your data.

    Read the article

  • Oracle Linux and Oracle VM pricing guide

    - by wcoekaer
    A few days ago someone showed me a pricing guide from a Linux vendor and I was a bit surprised at the complexity of it. Especially when you look at larger servers (4 or 8 sockets) and when adding virtual machine use into the mix. I think we have a very compelling and simple pricing model for both Oracle Linux and Oracle VM. Let me see if I can explain it in 1 page, not 10 pages. This pricing information is publicly available on the Oracle store, I am using the current public list prices. Also keep in mind that this is for customers using non-oracle x86 servers. When a customer purchases an Oracle x86 server, the annual systems support includes full use (all you can eat) of Oracle Linux, Oracle VM and Oracle Solaris (no matter how many VMs you run on that server, in case you deploy guests on a hypervisor). This support level is the equivalent of premier support in the list below. Let's start with Oracle VM (x86) : Oracle VM support subscriptions are per physical server on which you deploy the Oracle VM Server product. (1) Oracle VM Premier Limited - 1- or 2 socket server : $599 per server per year (2) Oracle VM Premier - more than 2 socket server (4, or 8 or whatever more) : $1199 per server per year The above includes the use of Oracle VM Manager and Oracle Enterprise Manager Cloud Control's Virtualization management pack (including self service cloud portal, etc..) 24x7 support, access to bugfixes, updates and new releases. It also includes all options, live migrate, dynamic resource scheduling, high availability, dynamic power management, etc If you want to play with the product, or even use the product without access to support services, the product is freely downloadable from edelivery. Next, Oracle Linux : Oracle Linux support subscriptions are per physical server. If you plan to run Oracle Linux as a guest on Oracle VM, VMWare or Hyper-v, you only have to pay for a single subscription per system, we do not charge per guest or per number of guests. In other words, you can run any number of Oracle Linux guests per physical server and count it as just a single subscription. (1) Oracle Linux Network Support - any number of sockets per server : $119 per server per year Network support does not offer support services. It provides access to the Unbreakable Linux Network and also offers full indemnification for Oracle Linux. (2) Oracle Linux Basic Limited Support - 1- or 2 socket servers : $499 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management. It includes ocfs2 as a clustered filesystem. (3) Oracle Linux Basic Support - more than 2 socket server (4, or 8 or more) : $1199 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management. It includes ocfs2 as a clustered filesystem (4) Oracle Linux Premier Limited Support - 1- or 2 socket servers : $1399 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management, XFS filesystem support. 
It also offers Oracle Lifetime support, backporting of patches for critical customers in previous versions of package and ksplice zero-downtime updates. (5) Oracle Linux Premier Support - more than 2 socket servers : $2299 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management, XFS filesystem support. It also offers Oracle Lifetime support, backporting of patches for critical customers in previous versions of package and ksplice zero-downtime updates. (6) Freely available Oracle Linux - any number of sockets You can freely download Oracle Linux, install it on any number of servers and use it for any reason, without support, without right to use of these extra features like Oracle Clusterware or ksplice, without indemnification. However, you do have full access to all errata as well. Need support? then use options (1)..(5) So that's it. Count number of 2 socket boxes, more than 2 socket boxes, decide on basic or premier support level and you are done. You don't have to worry about different levels based on how many virtual instance you deploy or want to deploy. A very simple menu of choices. We offer, inclusive, Linux OS clusterware, Linux OS Management, provisioning and monitoring, cluster filesystem (ocfs), high performance filesystem (xfs), dtrace, ksplice, ofed (infiniband stack for high performance networking). No separate add-on menus. NOTE : socket/cpu can have any number of cores. So whether you have a 4,6,8,10 or 12 core CPU doesn't matter, we count the number of physical CPUs.

    Read the article

  • Designing for the future

    - by Dennis Vroegop
    User interfaces and user experience design is a fast moving field. It’s something that changes pretty quick: what feels fresh today will look outdated tomorrow. I remember the day I first got a beta version of Windows 95 and I felt swept away by the user interface of the OS. It felt so modern! If I look back now, it feels old. Well, it should: the design is 17 years old which is an eternity in our field. Of course, this is not limited to UI. Same goes for many industries. I want you to think back of the cars that amazed you when you were in your teens (if you are in your teens then this may not apply to you). Didn’t they feel like part of the future? Didn’t you think that this was the ultimate in designs? And aren’t those designs hopelessly outdated today (again, depending on your age, it may just be me)? Let’s review the Win95 design: And let’s compare that to Windows 7: There are so many differences here, I wouldn’t even know where to start explaining them. The general feeling however is one of more usability: studies have shown Windows 7 is much easier to understand for new users than the older versions of Windows did. Of course, experienced Windows users didn’t like it: people are usually afraid of changes and like to stick to what they know. But for new users this was a huge improvement. And that is what UX design is all about: make a product easier to use, with less training required and make users feel more productive. Still, there are areas where this doesn’t hold up. There are plenty examples of designs from the past that are still fresh today. But if you look closely at them, you’ll notice some subtle differences. This differences are what keep the designs fresh. A good example is the signs you’ll find on the road. They haven’t changed much over the years (otherwise people wouldn’t recognize them anymore) but they have been changing gradually to reflect changes in traffic. The same goes for computer interfaces. With each new product or version of a product, the UI and UX is changed gradually. Every now and then however, a bigger change is needed. Just think about the introduction of the Ribbon in Microsoft Office 2007: the whole UI was redesigned. A lot of old users (not in age, but in times of using older versions) didn’t like it a bit, but new users or casual users seem to be more efficient using the product. Which, of course, is exactly the reason behind the changes. I believe that a big engine behind the changes in User Experience design has been the web. In the old days (i.e. before the explosion of the internet) user interface design in Windows applications was limited to choosing the margins between your battleship gray buttons. When the web came along, and especially the web 2.0 where the browsers started to act more and more as application platforms, designers stepped in and made a huge impact. In the browser, they could do whatever they wanted. In the beginning this was limited to the darn blink tag but gradually people really started to think about UX. Even more so: the design of the UI and the whole experience was taken away from the developers and put into the hands of people who knew what they were doing: UX designers. This caused some problems. Everyone who has done a web project in the early 2000’s must have had the same experience: the designers give you a set of Photoshop files and tell you to translate it to HTML. Which, of course, is very hard to do. However, with new tooling and new standards this became much easier. 
The latest version of HTML and CSS has taken the responsibility for the design away from the developers and placed them in the capable hands of the designers. And that’s where that responsibility belongs, after all, I don’t want a designer to muck around in my c# code just as much as he or she doesn’t want me to poke in the sites style definitions. This change in responsibilities resulted in good looking but more important: better thought out user interfaces in websites. And when websites became more and more interactive, people started to expect the same sort of look and feel from their desktop applications. But that didn’t really happen. Most business applications still have that battleship gray look and feel. Ok, they may use a different color but we’re not talking colors here but usability. Now, you may not be able to read the Dutch captions, but even if you did you wouldn’t understand what was going on. At least, not when you first see it. You have to scan the screen, read all the labels, see how they are related to the other elements on the screen and then figure out what they do. If you’re an experienced user of this application however, this might be a good thing: you know what to do and you get all the information you need in one single screen. But for most applications this isn’t the case. A lot of people only use their computer for a limited time a day (a weird concept for me, but it happens) and need it to get something done and then get on with their lives. For them, a user interface experience like the above isn’t working. (disclaimer: I just picked a screenshot, I am not saying this is bad software but it is an example of about 95% of the Windows applications out there). For the knowledge worker, this isn’t a problem. They use one or two systems and they know exactly what they need to do to achieve their goal. They don’t want any clutter on their screen that distracts them from their task, they just want to be as efficient as possible. When they know the systems they are very productive. The point is, how long does it take to become productive? And: could they be even more productive if the UX was better? Are there things missing that they don’t know about? Are there better ways to achieve what they want to achieve? Also: could a system be designed in such a way that it is not only much more easy to work with but also less tiring? in the example above you need to switch between the keyboard and mouse a lot, something that we now know can be very tiring. The goal of most applications (being client apps or websites on any kind of device) is to provide information. Information is data that when given to the right people, on the right time, in the right place and when it is correct adds value for that person (please, remember that definition: I still hear the statement “the information was wrong” which doesn’t make sense: data can be wrong, information cannot be). So if a system provides data, how can we make sure the chances of becoming information is as high as possible? A good example of a well thought-out system that attempts this is the Zune client. It is a very good application, and I think the UX is much better than it’s main competitor iTunes. Have a look at both: On the left you see the iTunes screenshot, on the right the Zune. As you notice, the Zune screen has more images but less chrome (chrome being visuals not part of the data you want to show, i.e. edges around buttons). 
The whole thing is text oriented or image oriented, where that text or image is part of the information you need. What is important is big, what’s less important is smaller. Yet, everything you need to know at that point is present and your attention is drawn immediately to what you’re trying to achieve: information about music. You can easily switch between the content on your machine and content on your Zune player but clicking on the image of the player. But if you didn’t know that, you’d find out soon enough: the whole UX is designed in such a way that it invites you to play around. So sooner or later (probably sooner) you’d click on that image and you would see what it does. In the iTunes version it’s harder to find: the discoverability is a lot lower. For inexperienced people the Zune player feels much more natural than the iTunes player, and they get up to speed a lot faster. How does this all work? Why is this UX better? The answer lies in a project from Microsoft with the codename (it seems to be becoming the official name though) “Metro”. Metro is a design language, based on certain principles. When they thought about UX they took a good long look around them and went out in search of metaphors. And they found them. The team noticed that signage in streets, airports, roads, buildings and so on are usually very clear and very precise. These signs give you the information you need and nothing more. It’s simple, clearly understood and fast to understand. A good example are airport signs. Airports can be intimidating places, especially for the non-experienced traveler. In the early 1990’s Amsterdam Airport Schiphol decided to redesign all the signage to make the traveller feel less disoriented. They developed a set of guidelines for signs and implemented those. Soon, most airports around the world adopted these ideas and you see variations of the Dutch signs everywhere on the globe. The signs are text-oriented. Yes, there are icons explaining what it all means for the people who can’t read or don’t understand the language, but the basic sign language is text. It’s clear, it’s high-contrast and it’s easy to understand. One look at the sign and you know where to go. The only thing I don’t like is the green sign pointing to the emergency exit, but since this is the default style for emergency exits I understand why they did this. If you look at the Zune UI again, you’ll notice the similarities. Text oriented, little or no icons, clear usage of fonts and all the information you need. This design language has a set of principles: Clean, light, open and fast Content, not chrome Soulful and alive These are just a couple of the principles, you can read the whole philosophy behind Metro for Windows Phone 7 here. These ideas seem to work. I love my Windows Phone 7. It’s easy to use, it’s clear, there’s no clutter that I do not need. It works for me. And I noticed it works for a lot of other people as well, especially people who aren’t as proficient with computers as I am. You see these ideas in a lot other places. Corning, a manufacturer of glass, has made a video of possible usages of their products. It’s their glimpse into the future. You’ll notice that a lot of the UI in the screens look a lot like what Microsoft is doing with Metro (not coincidentally Corning is the supplier for the Gorilla glass display surface on the new SUR40 device (or Surface v2.0 as a lot of people call it)). The idea behind this vision is that data should be available everywhere where you it. 
Systems should be available at all times and data is presented in a clear and light manner so that you can turn that data into information. You don’t need a lot of fancy animations that only distract from the data. You want the data and you want it fast. Have a look at this truly inspiring video that made: This is what I believe the future will look like. Of course, not everything is possible, or even desirable. But it is a nice way to think about the future . I feel very strongly about designing applications in such a way that they add value to the user. Designing applications that turn data into information. Applications that make the user feel happy to use them. So… when are you going to drop the battleship-gray designs? Tags van Technorati: surface,design,windows phone 7,wp7,metro

    Read the article

  • Resources for rails widget creation

    - by Kenji Crosland
    My rails app is about to go live and I'm trying to figure out how to create a widget based on data on the site. My Javascript is not too hot--limited to the O'reilly head first book on JS. I can read simple Javascript okay but when it comes to writing it I'm a little lost. That said, I hope to roll out a widget, maybe a wordpress plugin too soon after my app is launched. Does anyone know of any rails plugins, javascript templates, tutorials or books that can get me going? I did ask a similar question before here: But my limited knowledge of JS kept me from implementing the suggested answer properly.

    Read the article

  • Is Oracle AQ/Streams of any use in my situation?

    - by RenderIn
    I'm writing a workflow system that is driven entirely at each step by explicit human interaction. That is, a task is assigned to a person, that person selects from a few limited options {approve, reject, forward}, and then it is either sent along to the next person or terminated. Just curious if Oracle Streams/AQ has anything to offer over flat tables managed by regular web application code. The amount of processing after each action is fairly limited and the volume is not terribly high, so there's not really a need to throttle things by throwing them into a queue. What are some of the benefits of introducing a queue structure, or is it overkill for my situation?

    Read the article

  • Dangers of Windows API and Administrator accounts?

    - by Brett Powell
    I wrote a game server plugin last night that allowed me to create a user account and set it as administrator, which is a huge problem. Of course the simple fix is to create a basic user account with limited privileges for the game servers, so they would not have access to do things like this. I wanted to find out if there's anything else in the Windows API that would create such a huge vulnerability, though. I guess I just want to make sure that when the clients' game server accounts are moved to limited-access accounts, we won't have to worry about any of them using the Windows API to sabotage the machines. There are already enough exploits in the game itself to worry about, without having to worry about clients taking over the machines with plugins lol. Some related questions would be... Can you disable/enable Remote Desktop from C++? Can you get a list of AD user groups from C++? (not that a user belongs to, but a complete list)

    Read the article

  • Are there any desktop WYSIWYG editors available for MediaWiki / wiki?

    - by Eye of Hell
    Hello. MediaWiki is very good, but for programming tasks editing it via the web is not very handy: the WYSIWYG support is very limited, and pressing 'edit' + 'publish' for every small change and then waiting for the page to load is kind of annoying. I have seen a lot of desktop wikis (personal wikis) that are free from such problems. The best example is 'wikidPad', which has a usage pattern of 'focus, edit wiki in place, minimize'. This is very handy for programming work where you need to make small changes to the wiki and documentation during development, and documentation is written much more often than it is read :). But all such desktop wikis are personal - they don't have any wiki sharing (or only marginally limited support for it). So, is there a desktop application that can connect to MediaWiki and allows viewing and editing it via a rich WYSIWYG editor? Any hints are welcome.

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory, the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:
    --------------------------------------------
    METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
    --------------------------------------------
    One    |   65.626   |    540,036  |   200
    Two    |   20.207   |  3,269,600  |   200
    --------------------------------------------
    And here is the average of the previous numbers (if you don't want to do your own math):
    --------------------------------------------
    METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
    --------------------------------------------
    One    |    0.328   |    540,036  |     1
    Two    |    0.101   |  3,269,600  |     1
    --------------------------------------------
    Which method should I use and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because though it uses more memory, it is for a 1/3 of the time and would reduce the number of concurrent requests.

    Read the article

  • How are operating Systems created? Which language is chosen for coding?

    - by Nitesh Panchal
    Hello, how is a basic OS created? In which language do programmers write an OS? C or Assembly, or something else? Also, Assembly has a limited instruction set (mov etc.), so how can anybody create an OS in assembly? And even C has limited functionality, yet it is said to be the mother of all languages. How can anybody create a full OS with stunning graphics in C? It's simply beyond me. And how long does it take to create a very basic OS - unlike Windows 7 :p

    Read the article

  • How to find/display the Upload File limits on IIS with ASP.NET?

    - by NVRAM
    I have a web service on which the end users will be uploading ZIP archives that can be very large (one test file is over 200MB). I'd like to handle oversized files proactively and size-limited upload failures gracefully. Since the web app will be deployed on customers' machines, I cannot easily ensure that the configuration matches any fixed size. I've documented how to use the appcmd command for them to set the requestLimits.maxAllowedContentLength value beyond the 30MB default. But I'd like to handle it in the web app; I'm hoping for two things:
    1. To show the current limit on the page where they initiate the file upload, something along the lines of: Each file upload is limited to 15MB. If your archive is larger, (etc., etc., etc.)
    2. To give a meaningful error when that size is exceeded. Currently, it takes a long time for the data to be sent, and then I see a misleading 404 page.
    Any thoughts?

    Read the article

  • Castle Windsor Weak Typed Factory

    - by JeffN825
    In a very very limited number of scenarios, I need to go from an unknown Type (at compile time) to an instance of the object registered for that type. For the most part, I use typed factories and I know the type I want to resolve at compile time...so I inject a Func<IMyType> into a constructor ...but in these limited number of scenarios, in order to avoid a direct call to the container (and thus having to reference Windsor from the library, which is an anti-pattern I'd like to avoid), I need to inject a Func<Type,object>...which I want to internally container.Resolve(type) for the Type parameter of the Func. Does anyone have some suggestions on the easiest/most straightforward way of setting this up? I tried the following, but with this setup, I end up bypassing the regular TypedFactoryFacility altogether which is definitely not what I want: Kernel.Register(Component.For(typeof (Func<Type, object>)).LifeStyle.Singleton.UsingFactoryMethod( (kernel, componentModel, creationContext) => kernel.Resolve(/* not sure what to put here... */))); Thanks in advance for any assistance.

    Read the article

  • Comparing two speech sounds

    - by JessicaB
    I need to be able to determine if two sounds are very similar. The goal is to have a very limited vocabulary (10 or 15) of short one or two syllable words, then compare a captured sound to determine if it is one of those items with all the usual variability in environmental and capture conditions. The idea is that the user can issue a few simple commands by voice instead of keyboard or mouse. Does anyone know the best approach to this? I don't want to do full blown speech recognition, just something much more limited.

    Read the article

  • Open-source and client-side IRC-Client

    - by user125197
    I'm administrating a homepage with an IRC-Webclient. At the moment, it uses a Java-client, modified to be SSL-compatible and compatible with whatever design I want to use. The usual PJIRC thing, with users wishes and concerns implemented (I'll give you SSL-PJIRC if you want - no problem. The login-script will be improved for IE6-support this weekend, means write the object tag if an old IE is found, that's it. Rest runs perfectly well). But still, the better user experience would be a client that only requires the user to enable JavaScript. So, before I go into rewriting and customizing a complete Java-IRC-solution, I'd like to ask some other people - after researching for half a year of time. The requirements are: a) Free hosting, no cgi-bin is acceptable. A lot of hosters also have CGI supported, but it doesn't allow IRC-access. So, seems you are limited to x10-hosting for CGI:IRC. b) Solution must be open source (not free as in free beer, but free as in free speech - I want to be able to read every line of the code). c) Hosting cannot be limited to Windows-hosting. Free operation systems (anything with Linux, BSD, whatever - I think you get what I mean) must be possible. d) No server-side technology. I'm limited to everything that runs client-side only. (Well, a Java-Applet does...). d) Solution must support SSL. e) The side must be able to move. Means, you register to a new free hoster, load your things up via FileZilla and the thing simply runs. No server-concerning implementations needed. f) Old browsers must be supported. From all my research, there are two ways a) Java-Applet b) Use HTML5-Websockets (it is impossible to use the JavaScript-library I found, as it depends on ActiveX - means, Windows hosting). b) means "you are going to enter very unstable content". I worked with HTML5 for month, and, well ... Though, HTML5 also is not suitable for older browsers (old browser support is an absolutely necessary requirement). PHP by the way is a server side solution ... so no line of PHP. My idea now was a Java Applet, and a fallback which again is embedded and proprietary, but only requieres JavaScript (wsirc or Mibbit are great here, whilst I prefer wsirc ... and zooming fonts that are plainly to small, but worse, cannot read the code). So my question is - with free hosting, not installation on server, plain upload - do you see and open source, complelety client-side, ssl-compatible way to use or write a client? This wouldn't be my first programming project, I'm not afraid of the code. If there was a way and I had to write it, even from scratch, I'd do. From all I know, there sadly is none. But maybe some experienced admin has an idea? Best Wishes, yetanotheruser Sry my English isn't perfect. It's enough for programming at least.

    Read the article

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, and the entry-level Workgroup edition, as well as the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages, though; this is just a one-shot deal aimed at using the wizard on an ad-hoc basis. To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio for building your own packages, and a fully functional IDE integrated into Visual Studio. (You get the full VS 2005/2008 IDE with the product.) All core functions will be available but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes Standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but the advanced covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL Database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system.
    The advanced transforms only available with Enterprise edition:
    Data Mining Training Destination
    Data Mining Query Component
    Fuzzy Grouping
    Fuzzy Lookup
    Term Extraction
    Term Lookup
    Dimension Processing Destination
    Partition Processing Destination
    The advanced tasks only available with Enterprise edition:
    Data Mining Query Task
    So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often-asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.

    Read the article

  • More information on the Patch Tuesday updates for SQL Server

    - by AaronBertrand
    Last week, Microsoft released a series of patches for all supported versions of SQL Server (from SQL Server 2005 SP3 all the way to SQL Server 2008 R2). The reason for the patch against SQL Server installations is largely a client-side issue with the XML viewer application, and for SQL Server specifically, the exploit is limited to potential information disclosure. A very easy way to avoid exposure to this exploit is simply to never open a file with the .disco extension (these files are likely already...(read more)

    Read the article

  • Explaining interfaces to beginning programmers?

    - by cbmeeks
    I've had discussions with other programmers on interfaces (C#). I tried to use the analogy of interfaces being like a contract between programmers, meaning that when you design to an interface, you are designing to a "thought-out plan". This didn't fly. The other programmers (limited experience) couldn't get the concept. Or worse, refused to participate. How do you explain to people like that that there are reasons to use interfaces? Thanks

    Read the article

  • An XEvent a Day (12 of 31) – Using the Extended Events SSMS Addin

    - by Jonathan Kehayias
    The lack of SSMS support for Extended Events, coupled with the fact that a number of the existing Events in SQL Trace were not implemented in SQL Server 2008, has no doubt been a key factor in its slow adoption rate. Since the release of SQL Server Denali CTP1, I have already seen a number of blog posts that talk about the introduction of Extended Events in SQL Server, because there is now a stub for it inside of SSMS. Don’t get excited yet, the functionality in CTP1 is very limited at this point,...(read more)
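    For context, everything an SSMS addin like this exposes ultimately maps to plain T-SQL DDL, which is exactly the gap it fills. A minimal illustrative sketch of defining and starting a session by hand follows; the session name and the chosen event and target are arbitrary examples, not taken from the series.
      -- Illustrative Extended Events session; names and choices are examples only.
      CREATE EVENT SESSION [QueryCompletions] ON SERVER
      ADD EVENT sqlserver.sql_statement_completed
      ADD TARGET package0.ring_buffer;
      GO

      ALTER EVENT SESSION [QueryCompletions] ON SERVER
      STATE = START;
      GO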

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    We have some common misconceptions about returning the last inserted identity value from tables. To return the last inserted identity value we have the option to use the @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT function, depending on the requirement, but it will be a real mess if anybody uses any one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are almost similar functions in terms of returning an identity value. They all return values that are inserted into an identity column. Earlier, in SQL Server 7, we used to use @@IDENTITY to return the last inserted identity value because in those days we didn't have functions like SCOPE_IDENTITY or IDENT_CURRENT, but now we have all three. So let's check out which one is responsible for what. IDENT_CURRENT returns the last inserted identity value in a particular table. It never depends on a connection or the scope of the insert statement. The IDENT_CURRENT function takes a table name as a parameter. Here is the syntax to get the last inserted identity value in a particular table using the IDENT_CURRENT function: SELECT IDENT_CURRENT('Employee') Both @@IDENTITY and SCOPE_IDENTITY return the last inserted identity value created in any table in the current session. But there is a subtle difference between these two: SCOPE_IDENTITY returns a value inserted only within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions: SELECT @@IDENTITY SELECT SCOPE_IDENTITY() Now let's have a look at the following example. Suppose I have two tables called Employee and EmployeeLog.
      CREATE TABLE Employee
      (
          EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL,
          EmpName VARCHAR(100) NOT NULL,
          EmpSal FLOAT NOT NULL,
          DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())
      )

      CREATE TABLE EmployeeLog
      (
          EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL,
          EmpName VARCHAR(100) NOT NULL,
          EmpSal FLOAT NOT NULL,
          DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())
      )
    I have an insert trigger defined on the table Employee which inserts a new record into EmployeeLog whenever a record is inserted into the Employee table. So suppose I insert a new record into the Employee table using the following statement: INSERT INTO Employee (EmpName,EmpSal) VALUES ('Arefin','1') The trigger will be fired automatically and insert a record into EmployeeLog. Here the scope of the insert statement and the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog because it is not limited to a particular scope. If I want to get the Employee table's identity value, then I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It's safe and helps ensure bug-free code.
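    The insert trigger referred to above is not shown in the post. A minimal sketch of such a trigger, plus a query that makes the behaviour of the three functions visible side by side, might look like this; the trigger body and the final SELECT are illustrative assumptions, not the author's code.
      -- Hypothetical logging trigger of the kind described above.
      CREATE TRIGGER trgEmployeeInsert ON Employee
      AFTER INSERT
      AS
      BEGIN
          INSERT INTO EmployeeLog (EmpName, EmpSal)
          SELECT EmpName, EmpSal FROM inserted;
      END
      GO

      INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', 1);

      -- @@IDENTITY               : identity generated inside the trigger (EmployeeLog)
      -- SCOPE_IDENTITY()         : identity generated in this scope (Employee)
      -- IDENT_CURRENT('Employee'): last identity for that table, any session or scope
      SELECT @@IDENTITY                AS IdentityAnyScope,
             SCOPE_IDENTITY()          AS IdentityThisScope,
             IDENT_CURRENT('Employee') AS IdentityEmployeeTable;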

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new object ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk adverse and/or high scale categories have good reasons to push 32-bits objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable objects, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
The following table shows the section header information for a selection of headers from the previous link-edit example.

    Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
    13     .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
    20     .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
    21     .symtab          SYMTAB          0                 0                             0x450         0x450
    22     .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
    24     .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
    28     .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
    29     .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30

The data for most sections is present in only one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

    $ elfdump -T SUNW_ancillary a.out a.out.anc

    a.out:
    Ancillary Section:  .SUNW_ancillary
         index  tag                value
           [0]  ANC_SUNW_CHECKSUM  0x8724
           [1]  ANC_SUNW_MEMBER    0x1      a.out
           [2]  ANC_SUNW_CHECKSUM  0x8724
           [3]  ANC_SUNW_MEMBER    0x1a3    a.out.anc
           [4]  ANC_SUNW_CHECKSUM  0xfbe2
           [5]  ANC_SUNW_NULL      0

    a.out.anc:
    Ancillary Section:  .SUNW_ancillary
         index  tag                value
           [0]  ANC_SUNW_CHECKSUM  0xfbe2
           [1]  ANC_SUNW_MEMBER    0x1      a.out
           [2]  ANC_SUNW_CHECKSUM  0x8724
           [3]  ANC_SUNW_MEMBER    0x1a3    a.out.anc
           [4]  ANC_SUNW_CHECKSUM  0xfbe2
           [5]  ANC_SUNW_NULL      0

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element containing the checksum for the file being examined.

The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which of the larger set of files is the one being examined by comparing the first checksum value to the checksum of each member that follows.

Debugger Access and Use of Ancillary Objects

Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the fact that the primary object and ancillary objects contain the same section headers and share a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files:

  - Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
  - Create a section header array in memory, using the section header array from the object being examined as an initial template.
  - Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.
  - The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.

Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.
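The merge steps above can be sketched in code. The following is a minimal, illustrative libelf sketch rather than an actual debugger implementation: it assumes the member file names have already been recovered from the .SUNW_ancillary section, assumes a fixed upper bound on the number of sections, and uses a hypothetical merged_scn_t helper type of my own naming. SHF_SUNW_ABSENT is again defined locally with the value given later in the flag table.

    /*
     * Sketch only: merge the section headers of a primary object and its
     * ancillary objects into one in-memory array, recording for each
     * section which file actually holds its data.  Compile with -lelf.
     */
    #include <fcntl.h>
    #include <gelf.h>
    #include <libelf.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef SHF_SUNW_ABSENT
    #define SHF_SUNW_ABSENT 0x00200000      /* value from Table 13-8 below */
    #endif

    #define MAX_SCN 256                     /* sketch only: assumed bound */

    typedef struct {                        /* hypothetical helper type */
            GElf_Shdr   ms_shdr;            /* merged section header */
            const char *ms_file;            /* file holding this section's data */
    } merged_scn_t;

    static merged_scn_t merged[MAX_SCN];    /* zero-initialized */

    static size_t
    merge_members(const char **files, int nfiles)
    {
            size_t count = 0;
            int i;

            for (i = 0; i < nfiles; i++) {
                    int fd = open(files[i], O_RDONLY);
                    Elf *elf;
                    Elf_Scn *scn = NULL;

                    if (fd < 0 || (elf = elf_begin(fd, ELF_C_READ, NULL)) == NULL)
                            continue;
                    while ((scn = elf_nextscn(elf, scn)) != NULL) {
                            GElf_Shdr shdr;
                            size_t ndx = elf_ndxscn(scn);

                            if (ndx >= MAX_SCN || gelf_getshdr(scn, &shdr) == NULL)
                                    continue;
                            if (ndx + 1 > count)
                                    count = ndx + 1;
                            /*
                             * Every member carries the same header array; keep
                             * the copy from the file where the data is present,
                             * that is, where SHF_SUNW_ABSENT is not set.
                             */
                            if ((shdr.sh_flags & SHF_SUNW_ABSENT) == 0 ||
                                merged[ndx].ms_file == NULL) {
                                    merged[ndx].ms_shdr = shdr;
                                    if ((shdr.sh_flags & SHF_SUNW_ABSENT) == 0)
                                            merged[ndx].ms_file = files[i];
                            }
                    }
                    (void) elf_end(elf);
                    (void) close(fd);
            }
            return (count);
    }

    int
    main(int argc, char **argv)
    {
            size_t n, i;

            (void) elf_version(EV_CURRENT);
            n = merge_members((const char **)&argv[1], argc - 1);
            for (i = 1; i < n; i++)
                    (void) printf("[%zu] size=0x%llx data in %s\n", i,
                        (unsigned long long)merged[i].ms_shdr.sh_size,
                        merged[i].ms_file ? merged[i].ms_file : "(absent)");
            return (0);
    }

A real debugger would additionally fetch the section data (for example via elf_getdata) from whichever file the merge identifies, but the header bookkeeping above is the essence of the algorithm described in the steps.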
ELF Implementation Details (From the Solaris Linker and Libraries Guide)

To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

Part IV ELF Application Binary Interface
Chapter 13: Object File Format

Object File Format

Edit Note: This existing section at the beginning of the chapter describes the ELF header. There is a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

e_type
Identifies the object file type, as listed in the following table.

    Name                 Value   Meaning
    ET_NONE              0       No file type
    ET_REL               1       Relocatable file
    ET_EXEC              2       Executable file
    ET_DYN               3       Shared object file
    ET_CORE              4       Core file
    ET_LOSUNW            0xfefe  Start operating system specific range
    ET_SUNW_ANCILLARY    0xfefe  Ancillary object file
    ET_HISUNW            0xfefd  End operating system specific range
    ET_LOPROC            0xff00  Start processor-specific range
    ET_HIPROC            0xffff  End processor-specific range

Sections

Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.
...
sh_type
Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

sh_flags
Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.
...
Table 13-5 ELF Section Types, sh_type

    Name                 Value
    . . .
    SHT_LOSUNW           0x6fffffee
    SHT_SUNW_ancillary   0x6fffffee
    . . .
...
SHT_LOSUNW - SHT_HISUNW
Values in this inclusive range are reserved for Oracle Solaris OS semantics.

SHT_SUNW_ANCILLARY
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.
...
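As a small illustration of how a consumer might recognize membership in an ancillary group, the following sketch (mine, not from the manual) locates a section of type SHT_SUNW_ANCILLARY in an already-opened libelf descriptor. The type value is taken from Table 13-5 above and defined locally in case sys/elf.h does not provide it.

    /*
     * Sketch only: return the SHT_SUNW_ANCILLARY section of an object, or
     * NULL if the object is not part of an ancillary group.  On success,
     * *shdrp holds the section header.
     */
    #include <gelf.h>
    #include <libelf.h>
    #include <stddef.h>

    #ifndef SHT_SUNW_ANCILLARY
    #define SHT_SUNW_ANCILLARY 0x6fffffee   /* value from Table 13-5 */
    #endif

    static Elf_Scn *
    find_ancillary_scn(Elf *elf, GElf_Shdr *shdrp)
    {
            Elf_Scn *scn = NULL;

            while ((scn = elf_nextscn(elf, scn)) != NULL) {
                    if (gelf_getshdr(scn, shdrp) != NULL &&
                        shdrp->sh_type == SHT_SUNW_ANCILLARY)
                            return (scn);
            }
            return (NULL);
    }

The returned section header's sh_link member identifies the string table that holds the member names, as described in Table 13-9 below.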
Table 13-8 ELF Section Attribute Flags

    Name                  Value
    . . .
    SHF_MASKOS            0x0ff00000
    SHF_SUNW_NODISCARD    0x00100000
    SHF_SUNW_ABSENT       0x00200000
    SHF_SUNW_PRIMARY      0x00400000
    SHF_MASKPROC          0xf0000000
    . . .
...
SHF_SUNW_ABSENT
Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

SHF_SUNW_PRIMARY
The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.
...
Two members of the section header, sh_link and sh_info, hold special information, depending on section type.

Table 13-9 ELF sh_link and sh_info Interpretation

    sh_type               sh_link                                                    sh_info
    . . .
    SHT_SUNW_ANCILLARY    The section header index of the associated string table.  0
    . . .

Special Sections

Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

Table 13-10 ELF Special Sections

    Name               Type                  Attribute
    . . .
    .SUNW_ancillary    SHT_SUNW_ancillary    None
    . . .
...
.SUNW_ancillary
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.
...
Ancillary Section

Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
    typedef struct {
            Elf32_Word      a_tag;
            union {
                    Elf32_Word      a_val;
                    Elf32_Addr      a_ptr;
            } a_un;
    } Elf32_Ancillary;

    typedef struct {
            Elf64_Xword     a_tag;
            union {
                    Elf64_Xword     a_val;
                    Elf64_Addr      a_ptr;
            } a_un;
    } Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un.

a_val
These objects represent integer values with various interpretations.

a_ptr
These objects represent file offsets or addresses.

The following ancillary tags exist.

Table 13-NEW1 ELF Ancillary Array Tags

    Name                 Value   a_un
    ANC_SUNW_NULL        0       Ignored
    ANC_SUNW_CHECKSUM    1       a_val
    ANC_SUNW_MEMBER      2       a_ptr

ANC_SUNW_NULL
Marks the end of the ancillary section.

ANC_SUNW_CHECKSUM
Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

ANC_SUNW_MEMBER
Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

    Tag                  Meaning
    ANC_SUNW_CHECKSUM    Checksum of this object
    ANC_SUNW_MEMBER      Name of object #1
    ANC_SUNW_CHECKSUM    Checksum for object #1
    . . .
    ANC_SUNW_MEMBER      Name of object N
    ANC_SUNW_CHECKSUM    Checksum for object N
    ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.
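To make that layout concrete, here is a sketch (mine, not from the manual) of how a consumer might walk this array. It prints each member name and checksum, and marks the member whose checksum matches the leading ANC_SUNW_CHECKSUM, which is the file currently being read. It assumes a 64-bit object using the Elf64_Ancillary layout shown above (a 32-bit object such as the earlier a.out would use the Elf32_Ancillary layout), and it could start from a section located with the find_ancillary_scn() sketch shown earlier. The tag values come from Table 13-NEW1 and are defined locally in case sys/elf.h does not provide them.

    /*
     * Sketch only: list the members recorded in a .SUNW_ancillary section,
     * marking the entry whose checksum matches this file's own checksum.
     * "elf" is the open descriptor; "scn"/"shdr" describe its ancillary
     * section.  Member names are resolved through the string table named
     * by the section's sh_link, per Table 13-9.
     */
    #include <gelf.h>
    #include <libelf.h>
    #include <stdio.h>

    #ifndef ANC_SUNW_NULL
    #define ANC_SUNW_NULL     0
    #define ANC_SUNW_CHECKSUM 1
    #define ANC_SUNW_MEMBER   2
    #endif

    typedef struct {                /* matches Elf64_Ancillary above */
            Elf64_Xword a_tag;
            union {
                    Elf64_Xword a_val;
                    Elf64_Addr  a_ptr;
            } a_un;
    } anc64_t;

    static void
    list_members(Elf *elf, Elf_Scn *scn, const GElf_Shdr *shdr)
    {
            Elf_Data *data = elf_getdata(scn, NULL);
            anc64_t *anc;
            Elf64_Xword self = 0;
            const char *member = NULL;

            if (data == NULL || data->d_buf == NULL)
                    return;
            for (anc = (anc64_t *)data->d_buf;
                anc->a_tag != ANC_SUNW_NULL; anc++) {
                    switch (anc->a_tag) {
                    case ANC_SUNW_CHECKSUM:
                            if (member == NULL) {
                                    /* first entry: checksum of this file */
                                    self = anc->a_un.a_val;
                            } else {
                                    (void) printf("%s%s (checksum 0x%llx)\n",
                                        (anc->a_un.a_val == self) ? "* " : "  ",
                                        member,
                                        (unsigned long long)anc->a_un.a_val);
                            }
                            break;
                    case ANC_SUNW_MEMBER:
                            member = elf_strptr(elf, shdr->sh_link,
                                anc->a_un.a_ptr);
                            if (member == NULL)
                                    member = "?";
                            break;
                    }
            }
    }

Applied to the elfdump output shown earlier, this logic would mark a.out when reading a.out (leading checksum 0x8724) and a.out.anc when reading a.out.anc (leading checksum 0xfbe2).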
Related Other Work

The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor.

It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach.

The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. The details of modifying the DWARF data to be usable in this form are involved. Please see the above URL for more information.


  • 5 Best WordPress Themes for creating Reviews & Ratings Websites

    - by Aditi
    WordPress CMS is so powerful that we have seen a wide variety of websites built with it. It is not limited to just blogging; you can build robust community-driven websites as well. Recently I came across these themes as I was trying to build a similar reviews and ratings community website. I have reviewed all of [...] Related posts: 10+ Best Fashion WordPress Themes; 21+ WordPress Photo Blog & Portfolio Themes; 12 Best WordPress Themes for Church

