Search Results

Search found 29235 results on 1170 pages for 'dynamic management objects'.


  • Different fan behaviour in my laptop after upgrade, what to do now?

    - by student
    After upgrading from Lubuntu 13.10 to 14.04 the fan of my laptop seems to run much more often than in 13.10. When it runs, it doesn't run continuously but starts and stops every second. Running fwts fan results in:

        Results generated by fwts: Version V14.03.01 (2014-03-27 02:14:17).
        Some of this work - Copyright (c) 1999 - 2014, Intel Corp. All rights reserved.
        Some of this work - Copyright (c) 2010 - 2014, Canonical.
        This test run on 12/05/14 at 21:40:13 on host Linux einstein 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64.
        Command: "fwts fan".
        Running tests: fan.
        fan: Simple fan tests.
        --------------------------------------------------------------------------------
        Test 1 of 2: Test fan status.
        Test how many fans there are in the system. Check for the current status of the fan(s).
        PASSED: Test 1, Fan cooling_device0 of type Processor has max cooling state 10 and current cooling state 0.
        PASSED: Test 1, Fan cooling_device1 of type Processor has max cooling state 10 and current cooling state 0.
        PASSED: Test 1, Fan cooling_device2 of type LCD has max cooling state 15 and current cooling state 10.
        Test 2 of 2: Load system, check CPU fan status.
        Test how many fans there are in the system. Check for the current status of the fan(s).
        Loading CPUs for 20 seconds to try and get fan speeds to change.
        Fan cooling_device0 current state did not change from value 0 while CPUs were busy.
        Fan cooling_device1 current state did not change from value 0 while CPUs were busy.
        ADVICE: Did not detect any change in the CPU related thermal cooling device states. It could be that the devices are returning static information back to the driver and/or the fan speed is automatically being controlled by firmware using System Management Mode in which case the kernel interfaces being examined may not work anyway.
        ================================================================================
        3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.
        ================================================================================
        3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.

        Test Failure Summary
        ================================================================================
        Critical failures: NONE
        High failures: NONE
        Medium failures: NONE
        Low failures: NONE
        Other failures: NONE

        Test           |Pass |Fail |Abort|Warn |Skip |Info |
        ---------------+-----+-----+-----+-----+-----+-----+
        fan            |    3|     |     |     |     |     |
        ---------------+-----+-----+-----+-----+-----+-----+
        Total:         |    3|    0|    0|    0|    0|    0|
        ---------------+-----+-----+-----+-----+-----+-----+

    Here is the output of lsmod:

        Module                 Size   Used by
        i8k                    14421  0
        zram                   18478  2
        dm_crypt               23177  0
        gpio_ich               13476  0
        dell_wmi               12761  0
        sparse_keymap          13948  1 dell_wmi
        snd_hda_codec_hdmi     46207  1
        snd_hda_codec_idt      54645  1
        rfcomm                 69160  0
        arc4                   12608  2
        dell_laptop            18168  0
        bnep                   19624  2
        dcdbas                 14928  1 dell_laptop
        bluetooth              395423 10 bnep,rfcomm
        iwldvm                 232285 0
        mac80211               626511 1 iwldvm
        snd_hda_intel          52355  3
        snd_hda_codec          192906 3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel
        snd_hwdep              13602  1 snd_hda_codec
        snd_pcm                102099 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
        snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
        snd_seq_midi           13324  0
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_rawmidi            30144  1 snd_seq_midi
        coretemp               13435  0
        kvm_intel              143060 0
        kvm                    451511 1 kvm_intel
        snd_seq                61560  2 snd_seq_midi_event,snd_seq_midi
        joydev                 17381  0
        serio_raw              13462  0
        iwlwifi                169932 1 iwldvm
        pcmcia                 62299  0
        snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
        snd_timer              29482  2 snd_pcm,snd_seq
        lpc_ich                21080  0
        cfg80211               484040 3 iwlwifi,mac80211,iwldvm
        yenta_socket           41027  0
        pcmcia_rsrc            18407  1 yenta_socket
        pcmcia_core            23592  3 pcmcia,pcmcia_rsrc,yenta_socket
        binfmt_misc            17468  1
        snd                    69238  17 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_idt,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi
        soundcore              12680  1 snd
        parport_pc             32701  0
        mac_hid                13205  0
        ppdev                  17671  0
        lp                     17759  0
        parport                42348  3 lp,ppdev,parport_pc
        firewire_ohci          40409  0
        psmouse                102222 0
        sdhci_pci              23172  0
        sdhci                  43015  1 sdhci_pci
        firewire_core          68769  1 firewire_ohci
        crc_itu_t              12707  1 firewire_core
        ahci                   25819  2
        libahci                32168  1 ahci
        i915                   783485 2
        wmi                    19177  1 dell_wmi
        i2c_algo_bit           13413  1 i915
        drm_kms_helper         52758  1 i915
        e1000e                 254433 0
        drm                    302817 3 i915,drm_kms_helper
        ptp                    18933  1 e1000e
        pps_core               19382  1 ptp
        video                  19476  1 i915

    I tried one answer to the similar question "loud fan on Ubuntu 14.04" and created an /etc/i8kmon.conf like the following:

        # Run as daemon, override with --daemon option
        set config(daemon) 1
        # Automatic fan control, override with --auto option
        set config(auto) 1
        # Status check timeout (seconds), override with --timeout option
        set config(timeout) 2
        # Report status on stdout, override with --verbose option
        set config(verbose) 1
        # Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt}
        set config(0) {{0 0} -1 55 -1 55}
        set config(1) {{0 1} 50 60 55 65}
        set config(2) {{1 1} 55 80 60 85}
        set config(3) {{2 2} 70 128 75 128}

    With this setup the fan goes on even if the temperature is below 50 degrees Celsius (I don't see a pattern). However, I get the impression that the CPU gets hotter on average than without this file. What changes from 13.10 to 14.04 may be responsible for this? And if this is a bug, against which package should I report it?


  • Correlating traditional Windows joystick axes with HID

    - by Wade Williams
    I'm a bit confused by the description of joystick axes, and I'm hoping that someone has a link or document which could help clear my confusion. I'm not a Windows guy, so trying to port some traditional Windows gameport code has me a bit confused. We all know about the common first three axes: X, Y, Z. My understanding was that in the gameport-style interface the three other axes are: R, U, V. However, looking in my IOHIDUsageTables (OS X), I see:

        kHIDUsage_GD_X = 0x30,    /* Dynamic Value */
        kHIDUsage_GD_Y = 0x31,    /* Dynamic Value */
        kHIDUsage_GD_Z = 0x32,    /* Dynamic Value */
        kHIDUsage_GD_Rx = 0x33,   /* Dynamic Value */
        kHIDUsage_GD_Ry = 0x34,   /* Dynamic Value */
        kHIDUsage_GD_Rz = 0x35,   /* Dynamic Value */
        kHIDUsage_GD_Vx = 0x40,   /* Dynamic Value */
        kHIDUsage_GD_Vy = 0x41,   /* Dynamic Value */
        kHIDUsage_GD_Vz = 0x42,   /* Dynamic Value */
        kHIDUsage_GD_Vbrx = 0x43, /* Dynamic Value */
        kHIDUsage_GD_Vbry = 0x44, /* Dynamic Value */
        kHIDUsage_GD_Vbrz = 0x45, /* Dynamic Value */
        kHIDUsage_GD_Vno = 0x46,  /* Dynamic Value */

    This has me a bit confused, due to the three R axes (though that does not appear to be uncommon) and the lack of a U axis. Two questions:

    1) Can anyone confirm which HID axis the traditional U axis corresponds to? I saw one document describe it as "the axis for rudder pedals", leading me to believe it would be Rz.

    2) Can anyone describe in more detail the typical usages of the V and Vbr axes? I understand the descriptions are "vector" and "relative vector", respectively, but I'm having difficulty visualizing what that means in terms of a physical device.

    All enlightenment and documentation pointers welcome.


  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:

    "- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view.
    - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times
    Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough"

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side (Sun resellers, Solaris work, and I even worked with Sun's engineering on the making of a hierarchical storage product - Sun CIS, if you know it) and the applications side, mostly in LDAP and IDM. Over the years I've done support, service delivery and pre-sales / architecture design of IDM solutions for most big customers in Portugal. To name a few projects:

    - The first European deployment of Sun Access Manager (SAPO - Portugal Telecom)
    - The identity repository of 5 of the 5 biggest Portuguese banks
    - The Portuguese government's federation of services project

    DP: OK, in your blog response, you mentioned 3 topics. 1. Using Oracle's standards-based architecture, you were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles, etc.?

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions, but then reality kicks in. At an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps: everything was there. As a good part of the systems and apps were running on UNIX, a connector became needed in order to have UNIX boxes authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be set up for that component, and each new app had to be independently certified.

    A self-care user portal was an ongoing project, and AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on the go-live things weren't properly tested and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self-service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again.

    Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal, combined with a user workflow, so all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer, and even Tru64 systems were, for the first time, integrated with five minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to the AD for their authentication needs, so from the start everybody knew that they weren't 100% free of migration pains, but now they had a single point of problems to look at. If you're looking for numbers:

    - 500K directory entries (users)
    - 200-300 systems

    After the initial setup, I personally integrated about 20 systems/apps against LDAP in one day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread across different markets may see the regulator forcing a sale of part of a company for monopoly reasons, and companies that are in multiple countries have to comply with different legislations. Our job as IT architects, while addressing customer and employee authentication services, is quite hard and quite contradictory. On one hand, we need to give all of our employees access to the relevant systems, apps and resources, and we already have marketing talking with us, trying to find out who's a customer of the bought company but not of ours, to address. On the other hand, we have to do that keeping in mind that we may have to break up the whole effort, and that different countries' legislation may become a problem for a full integration plan. That's a job for user federation.

    You don't want to be the one telling your president that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, and you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.

    DP: 3. Cut response times of audit and security teams to 1/10th. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate the size of the sample with them, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better, and most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me explaining these time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest, so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.


  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further.

    Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison

    In Anne's book, Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on.

    That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an "explosion" of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions."

    Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include:

    - Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products, and Adobe's community help
    - The Microsoft Development Network Community Center
    - The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs
    - Cisco's customer support wiki and EMC's community, as well as Symantec's and Intuit's approaches
    - The efforts of Ubuntu, Mozilla, and the FLOSS community generally

    Adobe Writer Bob Bringhurst's Blog

    Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too.

    But what does this paradigm shift mean for existing technical writers, as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said.

    There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type - interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish."

    Clearly, then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and a flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators based on the amount of community content delivered by people outside the organization!

    To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

    Anne emphasized that success with the community model depends on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

    The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford not to explore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored; it must be embraced.

    It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle.

    Oracle's Acrolinx IQ deployment was used to author this article.


  • SQL SERVER – SSMS Automatically Generates TOP (100) PERCENT in Query Designer

    - by pinaldave
    Earlier this week, I was surfing various SQL forums to see what kind of help developers need in the SQL Server world. One of the questions indeed caught my attention. I am reproducing the complete question and scenario here to illustrate the point precisely. Additionally, I have added a second part to the question for completeness.

    Question: I am trying to create a view in Query Designer (not in the New Query Window). Every time I try to create a view, it always adds TOP (100) PERCENT automatically to the T-SQL script. No matter what I do, it always automatically adds the TOP (100) PERCENT to the script. I have attempted to copy-paste from Notepad, build a query, and a few other things, with no success. I am really not sure what I am doing wrong with Query Designer. Here is my query script (I use AdventureWorks as a sample database):

        SELECT Person.Address.AddressID
        FROM Person.Address
        INNER JOIN Person.AddressType ON Person.Address.AddressID = Person.AddressType.AddressTypeID
        ORDER BY Person.Address.AddressID

    This script is automatically replaced by the following query:

        SELECT TOP (100) PERCENT Person.Address.AddressID
        FROM Person.Address
        INNER JOIN Person.AddressType ON Person.Address.AddressID = Person.AddressType.AddressTypeID
        ORDER BY Person.Address.AddressID

    However, when I do the same from the New Query Window it works totally fine. But when I attempt to create a view of the same query, it gives the following error:

        Msg 1033, Level 15, State 1, Procedure myView, Line 6
        The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

    It is pretty clear to me now that the script which I have written seems to need TOP (100) PERCENT, so Query Designer adds it. Why do I need it? Is there any workaround to this issue?

    I find this question particularly interesting, as it really touches the fundamentals of T-SQL query writing. Please note that the query which is automatically changed is not in the New Query Editor but is opened from SSMS in the following way: Database >> Views >> Right Click >> New View (see the image below).

    Answer: The answer to the above question could be very long, but I will keep it simple and to the point. There are three things to discuss regarding the above script: 1) the reason for the error, 2) the reason for the auto-generated TOP (100) PERCENT, and 3) potential solutions to the error. Let us quickly see them in detail.

    1) Reason for the Error

    The reason for the error is already given in the error message: ORDER BY is invalid in views and a few other objects, and one has to use TOP or another qualifying keyword along with it. The semantics of the query are such that the optimizer only honors the ORDER BY in the same scope, that is, in the same SELECT/UPDATE/DELETE statement. Because one can apply an ORDER BY again outside the scope of the view, any effort spent ordering inside the view would be wasted. The final resultset of a query always follows the outermost query's ORDER BY, and for the same reason the optimizer honors the final order of the query and not that of the views (as the view will be used in another query for further processing, e.g. in a SELECT statement). For the same reason, ORDER BY is not allowed in a view. For further accuracy and clear guidance, I suggest you read this blog post by the Query Optimizer Team, where they explain the same subject very clearly.
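    If you want to see this behavior for yourself, here is a minimal sketch against AdventureWorks (myView is the name from the error message above; myView2 is just an illustrative name):

        -- Fails with Msg 1033: ORDER BY alone is not allowed in a view
        CREATE VIEW myView
        AS
        SELECT Person.Address.AddressID
        FROM Person.Address
        ORDER BY Person.Address.AddressID;
        GO

        -- Compiles because TOP is present, but as the next section explains,
        -- the ordering is still not guaranteed when the view is queried
        CREATE VIEW myView2
        AS
        SELECT TOP (100) PERCENT Person.Address.AddressID
        FROM Person.Address
        ORDER BY Person.Address.AddressID;
        GO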
    2) Reason for the Auto-Generated TOP (100) PERCENT

    One of the most popular workarounds to the above error is to use TOP (100) PERCENT in the view. TOP (100) PERCENT allows the user to put an ORDER BY in the query and thereby overcome the error we discussed. This gives users the impression that they have resolved the error and are successfully using ORDER BY in the view. Well, this is incorrect as well. The way this works is that when TOP (100) PERCENT is used, the order of the result is not guaranteed, and the ORDER BY is ignored in the outer query where the view is used. Here is a blog post on this subject: Interesting Observation – TOP 100 PERCENT and ORDER BY. So when you create a new view in SSMS and build a query with ORDER BY, SSMS automatically adds TOP (100) PERCENT to avoid the error. Here is the Connect item for the same issue. I am sure there are more Connect items as well, but I could not find them.

    3) Potential Solutions

    If you have been reading this post from the beginning, it is clear by now that ORDER BY should not be used in a view, as it does not serve any purpose unless there is a specific need for it. If you are going to use TOP (100) PERCENT with ORDER BY, there is absolutely no point to the ORDER BY; rather, avoid using both altogether. Here is another blog post of mine which describes the same subject: ORDER BY Does Not Work – Limitation of the Views Part 1. It is valid to use ORDER BY in a view if there is a clear business need for TOP with a percentage lower than 100 (for example TOP 10 PERCENT or TOP 50 PERCENT, etc.). In most cases ORDER BY is not needed in the view, and it should be used in the outermost query to present the results in the desired order. Users can remove TOP (100) PERCENT and ORDER BY from the view before using the view in any query or procedure, and put the ORDER BY in the outermost query as the business need dictates.

    I think this sums up the concept in a few words. This is a very long topic and not easy to illustrate in a single blog post. I welcome your comments and suggestions.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL View, T SQL, Technology
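    To make the recommended pattern from point 3 concrete, here is a minimal sketch (again using the question's AdventureWorks table; the view name myAddressView is illustrative): keep the view unordered, and order in the outermost query, where the ORDER BY is honored.

        -- The view carries no ORDER BY at all
        CREATE VIEW myAddressView
        AS
        SELECT Person.Address.AddressID
        FROM Person.Address;
        GO

        -- The outermost query decides the final order
        SELECT AddressID
        FROM myAddressView
        ORDER BY AddressID;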


  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! I think with this series on various reports, we have come to a logical point. We have covered all the reports at the server level; that is, the reports we saw were targeted at activities related to instance-level operations. These are much like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:

    Patient: Doc, it hurts when I touch my head.
    Doc: Ok, go on. What else have you experienced?
    Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.
    Doc: Hmmm…
    Patient: I feel it hurts when I touch anywhere in my body.
    Doc: Ahh… now I get it. You need a plaster on your finger, John.

    Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So this is the first blog in the series where we discuss database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four "disk" reports because they are similar:

    - Disk Usage
    - Disk Usage by Top Tables
    - Disk Usage by Table
    - Disk Usage by Partition

    Disk Usage

    This report shows multiple pieces of information about the database. Let us discuss them one by one. We have divided the output into 5 different sections.

    Section 1 shows the high-level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands: sys.data_spaces and DBCC SHOWFILESTATS.

    Sections 2 and 3 are pie charts, one for data file allocation and another for the transaction log file. The pie chart for "Data Files Space Usage (%)" shows the space consumed by data and indexes, allocated to the SQL Server database, and the unallocated space, which is allocated to the SQL Server database but not yet filled with anything. "Transaction Log Space Usage (%)" uses DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file.

    Section 4 shows the data from the Default Trace and looks at Event IDs 92, 93, 94 and 95, which are for "Data File Auto Grow", "Log File Auto Grow", "Data File Auto Shrink" and "Log File Auto Shrink" respectively. Here is an expanded view of that section. If the default trace is not enabled, this section is replaced by the message "Trace Log is disabled", as highlighted below.

    Section 5 of the report uses DBCC SHOWFILESTATS to get its information. Here is the enhanced version of that section. This shows the physical layout of the files.

    In case you have In-Memory objects in the database (from SQL Server 2014), the report shows information about those as well. Here is a screenshot taken for a different database, which has an in-memory table. I have highlighted the new things which are shown only for an in-memory database. The new sections highlighted above use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is 'FX' in sys.data_spaces.

    The next set of reports is targeted at getting information about a table and its storage. These reports can answer questions like:

    - Which is the biggest table in the database?
    - How many rows do we have in a table?
    - Is there any table which has a lot of reserved space that is unused?
    - Which partition of the table holds the most data?

    Disk Usage by Top Tables

    This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.

    Disk Usage by Table

    This report is the same as the earlier report, with a few differences: the first report shows only 1000 rows, and the first report orders by values in the DMV sys.dm_db_partition_stats whereas the second orders by the name of the table. Both of the reports have an interactive sort facility: we can click on any column header and change the sort order of the data.

    Disk Usage by Partition

    This report shows the distribution of the data in a table based on the partitions in the table. It is very similar to the previous output, now with the partition details. Here is the query taken from Profiler:

        SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
               (dense_rank() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
               a1.OBJECT_ID,
               a5.name AS [schema],
               a2.name,
               a1.index_id,
               a3.name AS index_name,
               a3.type_desc,
               a1.partition_number,
               a1.used_page_count * 8 AS total_used_pages,
               a1.reserved_page_count * 8 AS total_reserved_pages,
               a1.row_count
        FROM sys.dm_db_partition_stats a1
        INNER JOIN sys.all_objects a2 ON (a1.OBJECT_ID = a2.OBJECT_ID)
            AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
        INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id)
        LEFT OUTER JOIN sys.indexes a3 ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id))
        WHERE (SELECT MAX(DISTINCT partition_number)
               FROM sys.dm_db_partition_stats a4
               WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
          AND a2.TYPE <> N'S'
          AND a2.TYPE <> N'IT'
        ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

    Using all of the above reports, you should be able to get the usage of database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL
    Tagged: SQL Reports
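    If you prefer to pull the same numbers without the report UI, the underlying sources shown above are all queryable directly. Here is a minimal sketch along the lines of the Profiler query (page counts * 8 give KB; the aliases are illustrative):

        -- Log space per database (what section 3 of the Disk Usage report charts)
        DBCC SQLPERF (LOGSPACE);

        -- Space per table, index and partition, as in the
        -- "Disk Usage by Top Tables" / "by Partition" reports
        SELECT s.name AS [schema],
               o.name AS [table],
               p.index_id,
               p.partition_number,
               p.used_page_count * 8 AS used_kb,
               p.reserved_page_count * 8 AS reserved_kb,
               p.row_count
        FROM sys.dm_db_partition_stats p
        INNER JOIN sys.all_objects o ON o.object_id = p.object_id
        INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
        WHERE o.type = N'U'
        ORDER BY p.reserved_page_count DESC;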


  • Grow Your Business with Security

    - by Darin Pendergraft
    Author: Kevin Moulton

    Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog, so stay tuned for more posts from him.

    It happened again! There I was, reading something interesting online and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn't want to have to remember a new account and password, I didn't want to provide the requisite information, and I didn't want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for?

    A few days later, I had a similar experience, but this one went a little differently. I was surfing the web when I happened upon some little tchotchke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create an account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I've already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again.

    This is where security can grow your business. It's a differentiator. You've got to have a presence on the web, and that presence has to take into account all the smartphones everyone's carrying, and the tablets that took over Cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again.

    I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted two-factor authentication in case I had to log in from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that's who I am doing business with.

    Let's say, for example, that I'm in a regular Texas Hold-em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week, and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me. Now let's say that I log in from a Starbucks in Uzbekistan, and I transfer $2,500. What should my bank do now?

    Should they stop the transaction? Should they call my home number? (My former bank did exactly this once when I was taking money out of an ATM on a business trip, when I had provided my cell phone number as my primary contact. When I asked them why they called my home number rather than my cell, they told me that their "policy" is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?)

    But, back to Uzbekistan… Should my bank assume that I am happily at home in New Jersey, and someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn't be very happy if I happened to be traveling on business in Central Asia. What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside of my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password to my cell phone would prove that this is not just some hacker halfway around the world.

    But what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they'll come back, and perhaps even tweet about it, or Like you, and then their friends will follow.

    How can Oracle help? Oracle has the technology and expertise to help you grow your business with security. Oracle Adaptive Access Manager will help you prevent fraud while making it easier for your customers to do business with you, by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and by allowing your customers to transact business with you using the social media accounts they already know. You also have device fingerprinting and metrics to help you grow your business securely.

    Security is not just a cost anymore. It's a way to set your business apart. With Oracle's help, you can be the business that everyone's tweeting about.

    Image courtesy of Flickr user shareski


  • Language-agnostic term for typed things that need memory

    - by FredOverflow
    Is there an accepted general term that subsumes the concepts of variables, class instances and arrays? Basically "any typed thing that needs memory". In C++, such a thing is called an object, but I'm looking for a more language-agnostic term.

        § 1.8 The C++ object model
        1 The constructs in a C++ program create, destroy, refer to, access, and manipulate objects. An object is a region of storage. [...] An object can have a name (Clause 3). An object has a storage duration (3.7) which influences its lifetime (3.8). An object has a type (3.9).


  • Memory management for NSURLConnection

    - by eman
    Sorry if this has been asked before, but I'm wondering what the best memory management practice is for NSURLConnection. Apple's sample code uses -[NSURLConnection initWithRequest:delegate:] in one method and then releases in connection:didFailWithError: or connectionDidFinishLoading:, but this spits out a bunch of analyzer warnings and seems sort of dangerous (what if neither of those methods is called?). I've been autoreleasing (using +[NSURLConnection connectionWithRequest:delegate:]), which seems cleaner, but I'm wondering: in this case, is it ever possible for the NSURLConnection to be released before the connection has closed?


  • How to clone objects in NHibernate?

    - by Anry
    How do I implement cloning of objects (entities) in NHibernate? The entity classes have properties such as:

        public virtual IList<Club> Clubs { get; set; }

    All classes inherit from BaseObject. I tried to implement this using XML serialization, but the interfaces are not serialized. Thank you for your answers!


  • LINQ 4 XML - What is the proper way to query deep in the tree structure?

    - by Keith Barrows
    I have an XML structure that is 4 deep:

        <?xml version="1.0" encoding="utf-8"?>
        <EmailRuleList xmlns:xsd="EmailRules.xsd">
          <TargetPST name="Tech Communities">
            <Parse emailAsList="true" useJustDomain="false" fromAddress="false" toAddress="true">
              <EmailRule address="@aspadvice.com" folder="Lists, ASP" saveAttachments="false" />
              <EmailRule address="@sqladvice.com" folder="Lists, SQL" saveAttachments="false" />
              <EmailRule address="@xmladvice.com" folder="Lists, XML" saveAttachments="false" />
            </Parse>
            <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true">
              <EmailRule address="[email protected]" folder="Special Interest Groups|Northern Colorado Architects Group" saveAttachments="false" />
              <EmailRule address="[email protected]" folder="Support|SpamBayes" saveAttachments="false" />
            </Parse>
            <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false">
              <EmailRule address="[email protected]" folder="Support|GoDaddy" saveAttachments="false" />
              <EmailRule address="[email protected]" folder="Support|No-IP.com" saveAttachments="false" />
              <EmailRule address="[email protected]" folder="Discussions|Orchard Project" saveAttachments="false" />
            </Parse>
            <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false">
              <EmailRule address="@agilejournal.com" folder="Newsletters|Agile Journal" saveAttachments="false"/>
              <EmailRule address="@axosoft.ccsend.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/>
              <EmailRule address="@axosoft.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/>
              <EmailRule address="@cmcrossroads.com" folder="Newsletters|CM Crossroads" saveAttachments="false" />
              <EmailRule address="@urbancode.com" folder="Newsletters|Urbancode" saveAttachments="false" />
              <EmailRule address="@urbancode.ccsend.com" folder="Newsletters|Urbancode" saveAttachments="false" />
              <EmailRule address="@Infragistics.com" folder="Newsletters|Infragistics" saveAttachments="false" />
              <EmailRule address="@zdnet.online.com" folder="Newsletters|ZDNet Tech Update Today" saveAttachments="false" />
              <EmailRule address="@sqlservercentral.com" folder="Newsletters|SQLServerCentral.com" saveAttachments="false" />
              <EmailRule address="@simple-talk.com" folder="Newsletters|Simple-Talk Newsletter" saveAttachments="false" />
            </Parse>
          </TargetPST>
          <TargetPST name="[Sharpen the Saw]">
            <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true">
              <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" />
              <EmailRule address="[email protected]" folder="Social|LinkedIn USMC" saveAttachments="false"/>
            </Parse>
            <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false">
              <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" />
              <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" />
              <EmailRule address="[email protected]" folder="Social|Cruise Critic" saveAttachments="false"/>
            </Parse>
            <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false">
              <EmailRule address="@moody.edu" folder="Social|5 Love Languages" saveAttachments="false" />
              <EmailRule address="@postmaster.twitter.com" folder="Social|Twitter" saveAttachments="false"/>
              <EmailRule address="@diabetes.org" folder="Physical|American Diabetes Association" saveAttachments="false"/>
              <EmailRule address="@membership.webshots.com" folder="Social|Webshots" saveAttachments="false"/>
            </Parse>
          </TargetPST>
        </EmailRuleList>

    Now, I have both a FromAddress and a ToAddress that are parsed from an incoming email. I would like to run a LINQ query against a class set that was deserialized from this XML. For instance:

        ToAddress = [email protected]
        FromAddress = [email protected]

    Query:

    - Get EmailRule.Include(Parse).Include(TargetPST) where address == ToAddress AND Parse.ToAddress==true AND Parse.useJustDomain==false
    - Get EmailRule.Include(Parse).Include(TargetPST) where address == [ToAddress Domain Only] AND Parse.ToAddress==true AND Parse.useJustDomain==true
    - Get EmailRule.Include(Parse).Include(TargetPST) where address == FromAddress AND Parse.FromAddress==true AND Parse.useJustDomain==false
    - Get EmailRule.Include(Parse).Include(TargetPST) where address == [FromAddress Domain Only] AND Parse.FromAddress==true AND Parse.useJustDomain==true

    I am having a hard time figuring this LINQ query out. I can, of course, loop over all the bits in the XML like so (includes deserialization into objects):

        XmlSerializer s = new XmlSerializer(typeof(EmailRuleList));
        TextReader r = new StreamReader(path);
        _emailRuleList = (EmailRuleList)s.Deserialize(r);
        TargetPST[] PSTList = _emailRuleList.Items;
        foreach (TargetPST targetPST in PSTList)
        {
            olRoot = GetRootFolder(targetPST.name);
            if (olRoot != null)
            {
                Parse[] ParseList = targetPST.Items;
                foreach (Parse parseRules in ParseList)
                {
                    EmailRule[] EmailRuleList = parseRules.Items;
                    foreach (EmailRule targetFolders in EmailRuleList)
                    {
                    }
                }
            }
        }

    However, this means going through all these loops for each and every address. It makes more sense to me to query against the objects. Any tips appreciated!


  • Storing objects in the array

    - by Ockonal
    Hello, I want to save Boost signals objects in a map (association: signal name -> signal object). The signal signatures differ, so the mapped type should be boost::any.

        map<string, any> mSignalAssociation;

    The question is how to store the objects without defining the type of each new signal signature?

        typedef boost::signals2::signal<void (int KeyCode)> sigKeyPressed;
        mSignalAssociation.insert(make_pair("KeyPressed", sigKeyPressed()));

        // This is what I need: passing an object without a type definition
        mSignalAssociation["KeyPressed"] = (typename boost::signals2::signal<void (int KeyCode)>());

        // One more attempt which won't work. And I don't want to use this
        sigKeyPressed mKeyPressed;
        mSignalAssociation["KeyPressed"] = mKeyPressed;

    All these attempts throw the error:

        /usr/include/boost/noncopyable.hpp: In copy constructor ‘boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)’:
        In file included from /usr/include/boost/signals2/detail/signals_common.hpp:17:0,
        /usr/include/boost/noncopyable.hpp:27:7: error: ‘boost::noncopyable_::noncopyable::noncopyable(const boost::noncopyable_::noncopyable&)’ is private
        /usr/include/boost/signals2/signal_base.hpp:22:5: error: within this context
        ----------
        /usr/include/boost/signals2/detail/signal_template.hpp: In copy constructor ‘boost::signals2::signal1<void, int&, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>::signal1(const boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>&)’:
        In file included from /usr/include/boost/preprocessor/iteration/detail/iter/forward1.hpp:52:0,
        /usr/include/boost/signals2/detail/signal_template.hpp:578:5: note: synthesized method ‘boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)’ first required here
        from /usr/include/boost/signals2.hpp:16,
        ---------
        /usr/include/boost/signals2/preprocessed_signal.hpp: In copy constructor ‘boost::signals2::signal<void(int)>::signal(const boost::signals2::signal<void(int)>&)’:
        In file included from /usr/include/boost/signals2/signal.hpp:36:0,
        /usr/include/boost/signals2/preprocessed_signal.hpp:42:5: note: synthesized method ‘boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>::signal1(const boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>&)’ first required here
        from /home/ockonal/Workspace/Projects/Pseudoform-2/include/Core/Systems.hpp:6,


  • Ruby: Passing optional objects to method

    - by Sam
    class TShirt
      def size(suggested_size)
        if suggested_size == nil
          size = "please choose a size"
        else
          size = suggested_size
        end
      end
    end

    tshirt = TShirt.new
    tshirt.size("M") == "M"

    tshirt = TShirt.new
    size = tshirt.size(nil) == "please choose a size"

    What is a better way to have optional objects in a method? Procs?


  • Forcing the GC to collect JNI proxy objects

    - by SyBer
    Hi. While I do my best to clean up JNI objects to free native memory at the end of their use, there are still some that hang around for a long time, wasting native memory. Is there any way to force the GC to give priority to collecting these JNI proxies? I mean, is there a way to make the GC concentrate on a particular kind of object, namely the JNI proxies? Thanks.


  • SQLCMD Syntax error when running script that runs fine in management studio

    - by Troy
    When I run

        select '$(''test'')'

    in SQL Management Studio 2008 it returns

        $('test')

    When I run

        sqlcmd -S SERVER -E -d DATABASE -q "select "$(''test'')"

    on the command line it returns

        Sqlcmd: Error: Syntax error at line 1 near command '''.

    If I remove the dollar sign it works. Is the "$" a special character? Is this a sqlcmd bug? How can I change the script to get the desired result?
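    As it turns out, "$" is indeed special to sqlcmd: $(name) is its scripting-variable syntax, which the plain SSMS query window does not interpret (only SQLCMD Mode does), and the quote characters inside the parentheses here trip sqlcmd's parser. Assuming that is the cause, sqlcmd's documented -x switch, which disables variable substitution, should produce the same output as Management Studio. A sketch of the invocation, with the outer and inner quoting also rebalanced:

        sqlcmd -S SERVER -E -d DATABASE -x -Q "select '$(''test'')'"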


  • Accessing C# Anonymous Type Objects

    - by Ali Kazmi
    Hi, how do I access objects of an anonymous type outside the scope where they are declared? For example:

        void FuncB()
        {
            var obj = FuncA();
            Console.WriteLine(obj.Name);
        }

        ??? FuncA()
        {
            var a = (from e in DB.Entities
                     where e.Id == 1
                     select new { Id = e.Id, Name = e.Name }).FirstOrDefault();
            return a;
        }


  • How are distributed services better than distributed objects?

    - by Gabriel Šcerbák
    I am not interested in the technology (e.g. CORBA vs. Web Services); I am interested in principles. When we are doing OOP, why should we have something so procedural at a higher level? Isn't it the same as with OOP and relational databases? Often services are supported through code generation; apart from boilerplate, I think that is because we need a SOM, a service object mapper. So again, what are the reasons for services rather than objects?


  • Convincing Management to use WCF

    - by Kyle Rozendo
    Hi All, I asked another question, stating that I am unable to use WCF in a new Web Service project, and I have come to see that it seems to be the technology to use with regard to extensibility and security. My question is a simple one: How can I persuade management (and developers, for that matter) to move over to using WCF as opposed to other web service technologies (ASMX/CFC)? They won't be concerned with a comparison between the types (especially with regard to ASMX), simply with what there is to gain. Thanks for your time, Kyle


  • SQLCMD Restore works in Management Studio but not from DOS prompt

    - by Gautam
    Any idea why my RESTORE command works fine when run in Management Studio 2008 but not when run from the DOS prompt? Shown below is the error when running from the DOS prompt.

        C:\>SQLCMD -s local\SQL2008 -d master -Q "RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak' WITH FILE = 1, MOVE N'Sample.Db' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf', MOVE N'Sample.Db_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf', NOUNLOAD, REPLACE, STATS = 10"

        Msg 3634, Level 16, State 1, Server GAUTAM, Line 1
        The operating system returned the error '32(The process cannot access the file because it is being used by another process.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf'.
        Msg 3156, Level 16, State 8, Server GAUTAM, Line 1
        File 'Sample.Db' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf'. Use WITH MOVE to identify a valid location for the file.
        Msg 3634, Level 16, State 1, Server GAUTAM, Line 1
        The operating system returned the error '32(The process cannot access the file because it is being used by another process.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf'.
        Msg 3156, Level 16, State 8, Server GAUTAM, Line 1
        File 'Sample.Db_log' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf'. Use WITH MOVE to identify a valid location for the file.
        Msg 3119, Level 16, State 1, Server GAUTAM, Line 1
        Problems were identified while planning for the RESTORE statement. Previous messages provide details.
        Msg 3013, Level 16, State 1, Server GAUTAM, Line 1
        RESTORE DATABASE is terminating abnormally.

    However, if I execute this directly in Management Studio 2008, it works fine:

        RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak'
        WITH FILE = 1,
        MOVE N'Sample.Db' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf',
        MOVE N'Sample.Db_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf',
        NOUNLOAD, REPLACE, STATS = 10

    There are no lock or security issues, and the database doesn't exist on the server. I can't figure it out. Any ideas?
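    One detail worth double-checking before anything else: sqlcmd switches are case-sensitive, and -S (upper case) specifies the server, while -s (lower case) sets the column separator. As written, the command may therefore be connecting to the default local instance rather than to local\SQL2008, which would explain why the target mdf/ldf files appear to be in use (they would be held open by the other instance). Assuming that is what is happening, the fix is just a capital letter:

        C:\>SQLCMD -S local\SQL2008 -d master -Q "RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak' ..."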


  • Convert JSON query parameters to objects with JAX-RS

    - by deamon
    I have a JAX-RS resource which gets its parameters as a JSON string, like this:

        http://some.test/aresource?query={"paramA":"value1", "paramB":"value2"}

    The reason to use JSON here is that the query object can be quite complex in real use cases. I'd like to convert the JSON string to a Java object (dto in the example):

        @GET
        @Produces("text/plain")
        public String getIt(@QueryParam("query") DataTransferObject dto) {
            ...
        }

    Does JAX-RS support such a conversion from JSON passed as a query param to Java objects?

