Search Results

Search found 1886 results on 76 pages for 'vendor neutrality'.


  • JPA - FindByExample

    - by HDave
    Does anyone have a good example of how to do a findByExample in JPA? I know I can do it via my provider (Hibernate), but I don't want to break vendor neutrality. It seems like the Criteria API might be the way to go, but I am not sure.
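    Below is a minimal, provider-neutral sketch using the JPA 2 Criteria API. The Employee entity and its name and department properties are invented for illustration; the pattern is simply to turn each non-null property of the example object into an equality predicate.

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.criteria.CriteriaBuilder;
        import javax.persistence.criteria.CriteriaQuery;
        import javax.persistence.criteria.Predicate;
        import javax.persistence.criteria.Root;

        public class EmployeeDao {
            // Employee is a hypothetical mapped @Entity with getName()/getDepartment().
            public List<Employee> findByExample(EntityManager em, Employee example) {
                CriteriaBuilder cb = em.getCriteriaBuilder();
                CriteriaQuery<Employee> cq = cb.createQuery(Employee.class);
                Root<Employee> root = cq.from(Employee.class);

                // Only the populated fields of the example participate in the match.
                List<Predicate> predicates = new ArrayList<>();
                if (example.getName() != null) {
                    predicates.add(cb.equal(root.get("name"), example.getName()));
                }
                if (example.getDepartment() != null) {
                    predicates.add(cb.equal(root.get("department"), example.getDepartment()));
                }

                cq.select(root).where(predicates.toArray(new Predicate[0]));
                return em.createQuery(cq).getResultList();
            }
        }

    A generic version could walk the JPA metamodel instead of hard-coding fields, but the shape of the query stays the same and never touches a Hibernate-specific API.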

    Read the article

  • Thoughts on Build 2013

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/30/153294.aspxAnd so another Build conference has come to an end. Below are my thoughts/perspectives on various aspects of the event. I’ll do a separate blog post on my thoughts of the Build message for developers. The Good Moscone center was a great venue for Build! Easy to get around, easy to get to, and well maintained, it was a very comfortable conference venue. Yeah, the free swag was nice. Build has built up an expectation that attendees will always get something; it’ll be interesting to see how Microsoft maintains this expectation over the next few Build events. I still maintain that free swag should never be the main reason one attends an event, and for me this was definitely just an added bonus. I’m planning on trying to use the Surface as a dedicated 2nd device at work for meetings, I’ll share my experiences over the next few months. The hackathon event was a great idea, although personally I couldn’t justify spending the money on a conference registration just to spend the entire conference coding. Still, the apps that were created were really great and there was a lot of passion and excitement around the hackathon. I wonder if they couldn’t have had the hackathon on the Monday/Tuesday for those that wanted to participate so they didn’t miss any of the actual conference over Wed/Thurs. San Francisco was a great city to host Build. Getting from hotels to the conference center was very easy (well especially for me, I was only 3 blocks away) and the city itself felt very safe. However, if I never have to fly into SFO again I’ll be alright with that! Delays going into and out of SFO and both apparently were due to the airport itself. The Bad Build is one of those oddities on the conference landscape where people will pay to commit to attending an event without knowing anything about the sessions. We got our list of conference sessions when we registered on Tuesday, not before. And even then, we only got titles and not descriptions (those were eventually made available via the conference’s mobile application). I get it…they’re going to make announcements and they don’t want to give anything away through the session titles. But honestly, there wasn’t anything in the session titles that I would have considered a surprise. Breakfasts were brutal. High-carb pastries, donuts, and muffins with fruit and hard boiled eggs does not a conference breakfast make. I can’t believe that the difference between a continental breakfast per person and a hot breakfast buffet would have been a huge impact to a conference fee that was already around $2000. The vendor area was anemic. I don’t know why Microsoft forces the vendors into cookie-cutter booth areas (this year they were all made of plywood material). WPC, TechEd – booth areas there allow the vendors to be creative with their displays. Not so much for Build. Really odd was the lack of Microsoft’s own representation around Bing. In the day 1 keynote Microsoft made a big deal about Bing as an API. Yet there was nobody in the vendor area set up to provide more information or have discussions with about the Bing API. The Ugly Our name badges were NFC enabled. The purpose of this, beyond the vendors being able to scan your info, wasn’t really made clear. An attendee I talked to showed how you could get a reader app on your phone so you can scan other members cards and collect their contact info – which is a kewl idea; business cards are so 1990’s. 
But I was *shocked* at the amount of information that was on our name badges! Here’s what’s displayed on our name badge: - Name - Company - Twitter Handle I’m ok with that. But here’s what actually gets read: - Name - Company - Address Used for Registration - Phone Number Used for Registration So sharing that info with another attendee, they get way more of my info than just how to find me on Twitter! Microsoft, you need to fix this for the future. If vendors want to collect information on attendees, they should be able to collect an ID from the badge, then get a report with corresponding records afterwards. My personal information should not be so readily available, and without my knowledge! Final Verdict Maybe its my older age, maybe its where I’m at in life with family, maybe its where I’m at in my career, but when I consider whether a conference experience was valuable I get to the core reason I attend: opportunities to learn, opportunities to network, opportunities to engage with Microsoft. Opportunities to Learn:  Sessions I attended were generally OK, with some really stand out ones on Day 2. I would love to see Microsoft adopt the Dojo format for a portion of their sessions. Hands On Labs are dull, lecture style sessions are great for information sharing. But a guided hands-on coding session (Read: Dojo) provides the best of both worlds. Being that all content is publically available online to everyone (Build attendee or not), the value of attending the conference sessions is decreased. The value though is in the discussions that take part in person afterwards, which leads to… Opportunities to Network: I enjoyed getting together with old friends and connecting with Twitter friends in person for the first time. I also had an opportunity to meet total strangers. So from a networking perspective, Build was fantastic! I still think it would have been great to have an area for ad-hoc discussions – where speakers could announce they’d be available for more questions after their sessions, or attendees who wanted to discuss more in depth on a topic with other attendees could arrange space. Some people have no problems being outgoing and making these things happen, but others are not and a structured model is more attractive. Opportunities to Engage with Microsoft: Hit and miss on this one. Outside of the vendor area, unless you cornered or reached out to a speaker, there wasn’t any defined way to connect with blue badges. And as I mentioned above, Microsoft didn’t have full representation in the vendor area (no Bing). All in all, Build was a fun party where I was informed about some new stuff and got some free swag. Was it worth the time away from home and the hit to my PD budget? I’d say Somewhat. Build is a great informational conference, but I wouldn’t call it a learning conference. Considering that TechEd seems to be moving to more of an IT Pro focus, independent developer conferences seem to be the best value for those looking to learn and not just be informed. With the rapid development cycle Microsoft is embracing, we’re already seeing Build happening twice within a 12 month period. If that continues, the value of attending Build in person starts to diminish – especially with so much content available online. If Microsoft wants Build to be a must-attend event in the future, they need to start incorporating aspects of Tech Ed, past PDCs, and other conferences so those that want to leave with more than free swag have something to attract them.

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
    I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise sized clients has been different – adverse to OSS but receptive to an established vendor. The response I got was: Found it easier to influence change by showing how X can’t solve our problems or X is extremely costly to scale. Money talks. I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen though is we get tunnel vision and don’t present a full view of the costs associated with a solution. An “Acura”te Example (I’m so clever…) This is my dream vehicle, a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We’re a family of 4 (5 if my daughters ever get their wish of adding a dog), and I’ve always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav 4 has hit the 8 – 10 year mark. MSRP – $62,890 But as we all know, that’s not *really* the cost of the vehicle. There’s taxes and fees added on, there’s the extended warranty if I choose to purchase it, there’s the finance rate that needs to be factored in… MSRP –   $62,890 Taxes –      $7,546 Warranty - $2,500 SubTotal – $72,936 Finance Charge – $ 1094.04 Grand Total – $74,030 Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Well now we have our true price…or do we? Lifetime of the Vehicle I’m expecting to have this vehicle for 7 – 10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle are on top of this. I did some research, and here’s what I’ve found: Fuel and Mileage Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I’ll be gassing up more often. The Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I’ll be spending approximately $8000 – $10000 more on gas over a 10 year period than my current Rav4. Service Options Limited Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there’s only one Acura dealership in all of Winnipeg! So if, for whatever reason, I’m not satisfied with the level of service I’m stuck. Non Warranty Service Work Also let’s not forget that there’s a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I’ll need to get new tires at the 5 years mark as well, which can easily be $1200 – $1500 (I just paid $1000 for new tires for the Rav4 and we’re at the 5 year mark). Now these aren’t going to be *new* costs that I’m not used to from our existing vehicles, but they should still be factored in. I’d budget $500/year, or $5000 over the 10 years I’ll own the vehicle. 
Final Assessment So let’s re-assess the true cost of my dream MDX: MSRP                    $62,890 Taxes                       $7,546 Warranty                 $2,500 Finance Charge         $1094 Gas                        $10,000 Service Work            $5000 Grand Total           $89,030 So now I have a better idea of 10 year cost overall, and I’ve identified some concerns with local service availability. And there’s now much more to consider over the original $62,890 price tag. Tying This Back to Technology Solutions The process that we just went through is no different than what organizations do when considering implementing a new system, technology, or technology based solution, within their environments. It’s easy to tout the short term cost savings of particular product/platform/technology in a vacuum. But its when you consider the wider impact that the true cost comes into play. Let’s create a scenario: A company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are: - Because its open source its free - The organization would have access to the source code so they could alter it however they wished - It provided features not available with the current reporting suite At first this sounds great to the management and executive, but then they start asking some questions and uncover more information: - The OSS product is built on a technology not used anywhere within the organization - There are no vendors offering product support for the OSS product - The OSS product requires a specific server platform to operate on, one that’s not standard in the organization All of a sudden, the true cost of implementing this solution is starting to become clearer. The company might save money on licensing costs, but their training costs would increase significantly – developers would need to learn how to develop in the technology the OSS solution was built on, IT staff must learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem was found there was no vendor to contact for support. The true cost of implementing a “free” OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that’s just the short-term cost. Now the organization must ensure they maintain trained staff who can make changes to the OSS reporting solution and IT staff that will stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let’s not forget end-user training – in our example, anyone that runs reports will need to be trained on how to use the new system. Here’s the Point We still tend to look at software in an “off the shelf” kind of way. It’s very easy to say “oh, this product is better than vendor x’s product – and its free because its OSS!” but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you’re pitching an improved business process, a new system, or a new technology, you need to consider the bigger picture costs of implementation. 
What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.

    Read the article

  • GPGPU

    What
    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.
    Why
    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal"). The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, can perform the costly operation and return the output. The kernels are the things that execute on the GPGPU leveraging its power (and hence execute faster than what they could on the CPU) while the host CPU program waits for the results or asynchronously performs other tasks. However, GPGPUs have different characteristics to CPUs which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology.
    How
    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs. If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.
    On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute. Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
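    To make the "kernel" idea concrete, here is a minimal DirectCompute sketch: an HLSL compute shader that doubles every element of a buffer. The buffer and entry-point names are invented; the CPU-side DirectX code would create and bind the buffer and then call Dispatch.

        // Compile with the cs_5_0 target; dispatch ceil(N / 256) thread groups for N elements.
        RWStructuredBuffer<float> data : register(u0);

        [numthreads(256, 1, 1)]
        void DoubleKernel(uint3 id : SV_DispatchThreadID)
        {
            // One thread per element: read, scale, write back.
            data[id.x] = data[id.x] * 2.0f;
        }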

    Read the article

  • Ubuntu 10.04 & IBM DS3524 with FC multipath, inactive path is [failed][faulty] instead of [active][ghost]

    - by Graeme Donaldson
    OK, this is my setup: FC Switches IBM/Brocade, Switch1 and Switch2, independent fabrics. Server IBM x3650 M2, 2x QLogic QLE2460, 1 connected to each FC Switch. Storage IBM DS3524, 2x controllers with 4x FC ports each, but only 2x connected on each. +-----------------------------------------------------------------------+ | HBA1 Server HBA2 | +-----------------------------------------------------------------------+ | | | | | | +-----------------------------+ +------------------------------+ | Switch1 | | Switch2 | +-----------------------------+ +------------------------------+ | | | | | | | | | | | | | | | | | | | | +-----------------------------------+-----------------------------------+ | Contr A, port 3 | Contr A, port 4 | Contr B, port 3 | Contr B, port 4 | +-----------------------------------+-----------------------------------+ | Storage | +-----------------------------------------------------------------------+ My /etc/multipath.conf is from the IBM redbook for the DS3500, except I use a different setting for prio_callout, IBM uses /sbin/mpath_prio_tpc, but according to http://changelogs.ubuntu.com/changelogs/pool/main/m/multipath-tools/multipath-tools_0.4.8-7ubuntu2/changelog, this was renamed to /sbin/mpath_prio_rdac, which I'm using. devices { device { #ds3500 vendor "IBM" product "1746 FAStT" hardware_handler "1 rdac" path_checker rdac failback 0 path_grouping_policy multibus prio_callout "/sbin/mpath_prio_rdac /dev/%n" } } multipaths { multipath { wwid xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx alias array07 path_grouping_policy multibus path_checker readsector0 path_selector "round-robin 0" failback "5" rr_weight priorities no_path_retry "5" } } The output of multipath -ll with controller A as the preferred path: root@db06:~# multipath -ll sdg: checker msg is "directio checker reports path is down" sdh: checker msg is "directio checker reports path is down" array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt [size=4.9T][features=1 queue_if_no_path][hwhandler=0] \_ round-robin 0 [prio=2][active] \_ 5:0:1:0 sdd 8:48 [active][ready] \_ 5:0:2:0 sde 8:64 [active][ready] \_ 6:0:1:0 sdg 8:96 [failed][faulty] \_ 6:0:2:0 sdh 8:112 [failed][faulty] If I change the preferred path using IBM DS Storage Manager to Controller B, the output swaps accordingly: root@db06:~# multipath -ll sdd: checker msg is "directio checker reports path is down" sde: checker msg is "directio checker reports path is down" array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt [size=4.9T][features=1 queue_if_no_path][hwhandler=0] \_ round-robin 0 [prio=2][active] \_ 5:0:1:0 sdd 8:48 [failed][faulty] \_ 5:0:2:0 sde 8:64 [failed][faulty] \_ 6:0:1:0 sdg 8:96 [active][ready] \_ 6:0:2:0 sdh 8:112 [active][ready] According to IBM, the inactive path should be "[active][ghost]", not "[failed][faulty]". 
Despite this, I don't seem to have any I/O issues, but my syslog is being spammed with this every 5 seconds: Jun 1 15:30:09 db06 multipathd: sdg: directio checker reports path is down Jun 1 15:30:09 db06 kernel: [ 2350.282065] sd 6:0:2:0: [sdh] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 1 15:30:09 db06 kernel: [ 2350.282071] sd 6:0:2:0: [sdh] Sense Key : Illegal Request [current] Jun 1 15:30:09 db06 kernel: [ 2350.282076] sd 6:0:2:0: [sdh] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1 Jun 1 15:30:09 db06 kernel: [ 2350.282083] sd 6:0:2:0: [sdh] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 Jun 1 15:30:09 db06 kernel: [ 2350.282092] end_request: I/O error, dev sdh, sector 0 Jun 1 15:30:10 db06 multipathd: sdh: directio checker reports path is down Jun 1 15:30:14 db06 kernel: [ 2355.312270] sd 6:0:1:0: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 1 15:30:14 db06 kernel: [ 2355.312277] sd 6:0:1:0: [sdg] Sense Key : Illegal Request [current] Jun 1 15:30:14 db06 kernel: [ 2355.312282] sd 6:0:1:0: [sdg] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1 Jun 1 15:30:14 db06 kernel: [ 2355.312290] sd 6:0:1:0: [sdg] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 Jun 1 15:30:14 db06 kernel: [ 2355.312299] end_request: I/O error, dev sdg, sector 0 Does anyone know how I can get the inactive path to show "[active][ghost]" instead of "[failed][faulty]"? I assume that once I can get that right then the spam in my syslog will end as well. One final thing worth mentioning is that the IBM redbook doc targets SLES 11 so I'm assuming there's something a little different under Ubuntu that I just haven't figured out yet. Update: As suggested by Mitch, I've tried removing /etc/multipath.conf, and now the output of multipath -ll looks like this: root@db06:~# multipath -ll sdg: checker msg is "directio checker reports path is down" sdh: checker msg is "directio checker reports path is down" xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxdm-1 IBM ,1746 FASt [size=4.9T][features=0][hwhandler=0] \_ round-robin 0 [prio=1][active] \_ 5:0:2:0 sde 8:64 [active][ready] \_ round-robin 0 [prio=1][enabled] \_ 5:0:1:0 sdd 8:48 [active][ready] \_ round-robin 0 [prio=0][enabled] \_ 6:0:1:0 sdg 8:96 [failed][faulty] \_ round-robin 0 [prio=0][enabled] \_ 6:0:2:0 sdh 8:112 [failed][faulty] So its more or less the same, with the same message in the syslog every 5 minutes as before, but the grouping has changed.

    Read the article

  • Backing up my Windows Home Server to the Cloud&hellip;

    - by eddraper
    Ok, here’s my scenario: Windows Home Server with a little over 3TB of storage.  This includes many years of our home network’s PC backups, music, videos, etcetera. I’d like to get a backup off-site, and the existing APIs and apps such as CloudBerry Labs WHS Backup service are making it easy.  Now, all it’s down to is vendor and the cost of the actual storage.   So,  I thought I’d take a lazy Saturday morning and do some research on this and get the ball rolling.  What I discovered stunned me…   First off, the pricing for just about everything was loaded with complexity.  I learned that it wasn’t just about storage… it was about network usage, requests, sites, replication, and on and on. I really don’t see this as rocket science.  I have a disk image.  I want to put it in the cloud.  I’m not going to be be using it but once daily for incremental backups.  Sounds like a common scenario.  Yes, if “things get real” and my server goes down, I will need to bring down a lot of data and utilize a fair amount of vendor infrastructure.  However, this may never happen.  Offsite storage is an insurance policy.   The complexity of the cost structures, perhaps by design, create an environment where it’s incredibly hard to model bottom line costs and compare vendor all-up pricing.  As it is a “lazy Saturday morning,” I’m not in the mood for such antics and I decide to shirk the endeavor entirely.  Thus, I decided to simply fire up calc.exe and do some a simple arithmetic model based on price per GB.  I shuddered at the results.  Certainly something was wrong… did I misplace a decimal point?  Then I discovered CloudBerry’s own calculator.   Nope, I hadn’t misplaced those decimals after all.  Check it out (pricing based on 3174 GB):   Amazon S3 $398.00 per month $4761 per year Azure $396.75 per month $4761 per year Google $380.88 per month $4570.56 per year   Conclusion: Rampant crack smoking at vendors.  Seriously.  Out. Of. Their. Minds. Now, to Amazon’s credit, vision, and outright common sense, they had one offering which directly addresses my scenario:   Amazon Glacier $31.74 per month $380.88 per year   hmmm… It’s on the table.  Let’s see what it would cost to just buy some drives, an enclosure and cart them over to a friend’s house.   2 x 2TB Drives from NewEgg.com $199.99   Enclosure $39.99     $239.98   Carting data to back and forth to friend’s within walking distance pain   Leave drive unplugged at friend’s $0 for electricity   Possible data loss No way I can come and go every day.     I think I’ll think on this a bit more…
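    For anyone repeating the calc.exe exercise, the per-GB rates implied by those totals (back-calculated from the figures above, so approximate) work out to:

        $396.75 / 3,174 GB ≈ $0.125 per GB per month   (Azure; S3 and Google are in the same range)
        $31.74  / 3,174 GB  = $0.010 per GB per month   (Glacier)

    That roughly 12x gap is why the archive tier is the only cloud option in the same ballpark as carting drives over to a friend's house.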

    Read the article

  • SQL SERVER – MSQL_XP – Wait Type – Day 20 of 28

    - by pinaldave
    In this blog post, I am going to discuss something from my field experience. During consultations I have seen various wait types, but one of my customers, who has been using SQL Server for all his operations, had an interesting issue with a particular wait type. Our customer had more than 100 SQL Server instances running, and MSQL_XP was the most prominent wait type across the whole estate. While running sp_who2 and other diagnosis queries, I could not immediately figure out what the issue was because the query with that kind of wait type was nowhere to be found. After a day of research, I was relieved that the solution was very easy to figure out. Let us continue discussing this wait type. From Book On-Line: MSQL_XP occurs when a task is waiting for an extended stored procedure to end. SQL Server uses this wait state to detect potential MARS application deadlocks. The wait stops when the extended stored procedure call ends. MSQL_XP Explanation: This wait type is created because of an extended stored procedure. Extended Stored Procedures are executed within SQL Server; however, SQL Server has no control over them. Unless you know what the code for the extended stored procedure is and what it is doing, it is impossible to understand why this wait type is coming up. Reducing MSQL_XP wait: As discussed, it is hard to understand the Extended Stored Procedure if the code for it is not available. In the scenario described at the beginning of this post, our client was using a third-party backup tool, and that backup tool was using an Extended Stored Procedure. After we learned that this wait type was coming from the extended stored procedure of the backup tool they were using, we contacted the tech team of its vendor. The vendor admitted that the code was not optimal in some places, and within the day they provided a patch. Once the updated version was installed, the issue with this wait type disappeared. As viewed in the wait statistics of all the 100+ SQL Server instances, the MSQL_XP wait type was no longer found. In simpler terms, you must first identify which Extended Stored Procedure is creating the MSQL_XP wait type and see if you can get in touch with the creator of the SP so you can help them optimize the code. If you have encountered this MSQL_XP wait type, I encourage all of you to write about how you managed it. Please do not mention the name of the vendor in your comment as I will not approve it. The focus of this blog post is to understand the wait types, not to talk about others. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and I do not claim it to be accurate in every environment. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
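    As a practical starting point, a couple of generic queries (a sketch only; adapt to your environment) show how much this wait contributes on an instance and which extended stored procedures are registered:

        -- Cumulative MSQL_XP waits since the last restart (or since the stats were cleared).
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'MSQL_XP';

        -- Extended stored procedures registered on the instance (run in the master database).
        EXEC sp_helpextendedproc;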

    Read the article

  • How to check if a cdrom is in the tray remotely (via ssh)?

    - by adempewolff
    I have a server running Ubuntu 10.04 (it's on the other side of the world and I haven't built up the wherewithal to upgrade it remotely yet) and I have been told that there is a CD in one of it's two CD drives. I want to rip an image of the cd and then download it to my local computer (I don't need help with either of these steps). However, I cannot seem to confirm whether or not there actually is a CD in the drive as I was told. It did not automatically mount anywhere (which I'm thinking might just be a result of it being a headless server not running X, nautilus, or any of the other nice user friendly things). There are two CD drives connected via SCSI: austin@austinvpn:/proc/scsi$ cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: ATA Model: WDC WD400EB-75CP Rev: 06.0 Type: Direct-Access ANSI SCSI revision: 05 Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: Lite-On Model: LTN486S 48x Max Rev: YDS6 Type: CD-ROM ANSI SCSI revision: 05 Host: scsi1 Channel: 00 Id: 01 Lun: 00 Vendor: SAMSUNG Model: CD-R/RW SW-248F Rev: R602 Type: CD-ROM ANSI SCSI revision: 05 However when I try mounting either of these devices (and every other device that could possibly be the cd-drive), it says no medium found: austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd1 /cdrom mount: no medium found on /dev/sr1 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd0 /cdrom mount: no medium found on /dev/sr0 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom /cdrom mount: no medium found on /dev/sr1 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom1 /cdrom mount: no medium found on /dev/sr0 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrw /cdrom mount: no medium found on /dev/sr1 Here are the contents of my /dev folder: austin@austinvpn:/proc/scsi$ ls /dev agpgart loop6 ram6 tty10 tty38 tty8 austinvpn loop7 ram7 tty11 tty39 tty9 block lp0 ram8 tty12 tty4 ttyS0 bsg mapper ram9 tty13 tty40 ttyS1 btrfs-control mcelog random tty14 tty41 ttyS2 bus mem rfkill tty15 tty42 ttyS3 cdrom net root tty16 tty43 urandom cdrom1 network_latency rtc tty17 tty44 usbmon0 cdrw network_throughput rtc0 tty18 tty45 usbmon1 char null scd0 tty19 tty46 usbmon2 console oldmem scd1 tty2 tty47 usbmon3 core parport0 sda tty20 tty48 usbmon4 cpu_dma_latency pktcdvd sda1 tty21 tty49 vcs disk port sda2 tty22 tty5 vcs1 dri ppp sda5 tty23 tty50 vcs2 ecryptfs psaux sg0 tty24 tty51 vcs3 fb0 ptmx sg1 tty25 tty52 vcs4 fd pts sg2 tty26 tty53 vcs5 full ram0 shm tty27 tty54 vcs6 fuse ram1 snapshot tty28 tty55 vcs7 hpet ram10 snd tty29 tty56 vcsa input ram11 sndstat tty3 tty57 vcsa1 kmsg ram12 sr0 tty30 tty58 vcsa2 log ram13 sr1 tty31 tty59 vcsa3 loop0 ram14 stderr tty32 tty6 vcsa4 loop1 ram15 stdin tty33 tty60 vcsa5 loop2 ram2 stdout tty34 tty61 vcsa6 loop3 ram3 tty tty35 tty62 vcsa7 loop4 ram4 tty0 tty36 tty63 vga_arbiter loop5 ram5 tty1 tty37 tty7 zero And here is my fstab file: austin@austinvpn:/proc/scsi$ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/mapper/austinvpn-root / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda1 during installation UUID=ed5520ae-c690-4ce6-881e-3598f299be06 /boot ext2 defaults 0 2 /dev/mapper/austinvpn-swap_1 none swap sw 0 0 Am I missing something/doing something wrong, or is there just no CD in the drive or is the drive possibly broken? Is there any nice command to list devices with mountable media? Thanks in advance for any help!
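    Not from the original post, but two generic probes worth trying as a sketch: reading the first sector with dd fails with an I/O or "no medium" error on an empty drive, and blkid prints nothing unless it can identify media.

        # Try to read one 2048-byte sector from each drive; success suggests a disc is loaded.
        sudo dd if=/dev/sr0 of=/dev/null bs=2048 count=1 && echo "medium present in sr0"
        sudo dd if=/dev/sr1 of=/dev/null bs=2048 count=1 && echo "medium present in sr1"

        # Ask blkid what it sees on each drive (no output means no identifiable medium).
        sudo blkid /dev/sr0 /dev/sr1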

    Read the article

  • What DX level does my graphics card support? Does it go to 11?

    - by Daniel Moth
    Recently I run into a situation that I have run into quite a few times. Someone encounters a machine and the question arises: "Is there a DirectX 11 card in this machine?". Typically the reason you are interested in that is because cards with DirectX 11 drivers fully support DirectCompute (and by extension C++ AMP) for GPGPU programming. The driver specifically is WDDM (1.1 on Windows 7 and Windows 8 introduces WDDM 1.2 with cool new capabilities). There are many ways for figuring out if you have a DirectX11 card, so here are the approaches that you can use, with a bonus right at the end of the post. Run DxDiag WindowsKey + R, type DxDiag and hit Enter. That is the DirectX diagnostic tool, which unfortunately, only tells you on the "System" tab what is the highest version of DirectX installed on your machine. So if it reports DirectX 11, that doesn't mean you have a DX11 driver! The "Display" tab has a promising "DDI version" label, but unfortunately that doesn't seem to be accurate on the machines I've tested it with (or I may be misinterpreting its use). Either way, this tool is not the one you want for this purpose, although it is good for telling you the WDDM version among other things. Use the Microsoft hardware page There is a Microsoft Windows 7 compatibility center, that lists all hardware (tip: use the advanced search) and you could try and locate your device there… good luck. Use Wikipedia or the hardware vendor's website Use the Wikipedia page for the vendor cards, for both nvidia and amd. Often this information will also be in the specifications for the cards on the IHV site, but is is nice that wikipedia has a single page per vendor that you can search etc. There is a column in the tables for API support where you can see the DirectX version. Check if it is one of these recommended DX11 cards You may not have a DirectX 11 card and are interested in purchasing one. While I am in no position to make recommendations, I will list here some cards from two big IHVs that we know are DirectX 11 capable. Some AMD (aka ATI) cards Low end, inexpensive DX11 hardware: Radeon 5450, 5550, 6450, 6570 Mid range (decent perf, single precision): Radeon 5750, 5770, 6770, 6790 High end (capable of double precision): Radeon 5850, 5870, 6950, 6970 Single precision APUs: AMD E-Series APUs AMD A-Series APUs Some NVIDIA cards Low end, inexpensive DX11 hardware: GeForce GT430, GT 440, GT520, GTS 450 Quadro 400, 600 Mid-range (decent perf, single precision): GeForce GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti Quadro 2000 High end (capable of double precision): GeForce GTX 480, GTX 570, GTX 580, GTX 590, GTX 595 Quadro 4000, 5000, 6000 Tesla C2050, C2070, C2075 Get the DirectX SDK and run DirectX Caps Viewer Download and install the June 2010 DirectX SDK. As part of that you now have the DirectX Capabilities Viewer utility (find it in your start menu by searching for "DirectX Caps Viewer", the filename is DXCapsViewer.exe). It will list all your devices (emulated, and real hardware ones) under the first node. Expand the hardware entries and then expand again the Direct3D 11 folder. If you see D3D_FEATURE_LEVEL_11_ under that, then your card supports feature level 11 which means it supports DirectCompute and C++ AMP. In the following screenshot of one of my old laptops, the card only goes to feature level 10. Run a utility from the web that just tells you! Of course, writing some C++ AMP code that enumerates accelerators and lists the ones that are capable is trivial. 
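    For reference, that trivial enumeration looks roughly like the sketch below (assuming Visual C++ 2012 or later, which ships the C++ AMP runtime):

        #include <amp.h>
        #include <iostream>

        int main()
        {
            // Print every accelerator C++ AMP can see, flagging emulated (non-DX11-hardware) ones.
            for (const concurrency::accelerator& acc : concurrency::accelerator::get_all())
            {
                std::wcout << acc.description
                           << (acc.is_emulated ? L"  [emulated]" : L"  [DX11 hardware]")
                           << (acc.supports_double_precision ? L"  (double precision)" : L"")
                           << std::endl;
            }
            return 0;
        }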
However that requires that you have redistributed the runtime, so a more broadly applicable approach is to use the DX APIs directly to enumerate the DX11 capable cards. That is exactly what the development lead for C++ AMP has done and he describes and shares that utility at this post. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Guest blog: A Closer Look at Oracle Price Analytics by Will Hutchinson

    - by Takin Babaei
    Overview:  Price Analytics helps companies understand how much of each sale goes into discounts, special terms, and allowances. This visibility lets sales management see the panoply of discounts and start seeing whether each discount drives desired behavior. In Price Analytics monitors parts of the quote-to-order process, tracking quotes, including the whole price waterfall and seeing which result in orders. The “price waterfall” shows all discounts between list price and “pocket price”. Pocket price is the final price the vendor puts in its pocket after all discounts are taken. The value proposition: Based on benchmarks from leading consultancies and companies I have talked to, where they have studied the effects of discounting and started enforcing what many of them call “discount discipline”, they find they can increase the pocket price by 0.8-3%. Yes, in today’s zero or negative inflation environment, one can, through better monitoring of discounts, collect what amounts to a price rise of a few percent. We are not talking about selling more product, merely about collecting a higher pocket price without decreasing quantities sold. Higher prices fall straight to the bottom line. The best reference I have ever found for understanding this phenomenon comes from an article from the September-October 1992 issue of Harvard Business Review called “Managing Price, Gaining Profit” by Michael Marn and Robert Rosiello of McKinsey & Co. They describe the outsized impact price management has on bottom line performance compared to selling more product or cutting variable or fixed costs. Price Analytics manages what Marn and Rosiello call “transaction pricing”, namely the prices of a given transaction, as opposed to what is on the price list or pricing according to the value received. They make the point that if the vendor does not manage the price waterfall, customers will, to the vendor’s detriment. It also discusses its findings that in companies it studied, there was no correlation between discount levels and any indication of customer value. I urge you to read this article. What Price Analytics does: Price analytics looks at quotes the company issues and tracks them until either the quote is accepted or rejected or it expires. There are prebuilt adapters for EBS and Siebel as well as a universal adapter. The target audience includes pricing analysts, product managers, sales managers, and VP’s of sales, marketing, finance, and sales operations. It tracks how effective discounts have been, the win rate on quotes, how well pricing policies have been followed, customer and product profitability, and customer performance against commitments. It has the concept of price waterfall, the deal lifecycle, and price segmentation built into the product. These help product and sales managers understand their pricing and its effectiveness on driving revenue and profit. They also help understand how terms are adhered to during negotiations. They also help people understand what segments exist and how well they are adhered to. To help your company increase its profits and revenues, I urge you to look at this product. If you have questions, please contact me. Will HutchinsonMaster Principal Sales Consultant – Analytics, Oracle Corp. Will Hutchinson has worked in the business intelligence and data warehousing for over 25 years. He started building data warehouses in 1986 at Metaphor, advancing to running Metaphor UK’s sales consulting area. He also worked in A.T. 
Kearney’s business intelligence practice for over four years, running projects and providing training to new consultants in the IT practice. He also worked at Informatica and then Siebel, before coming to Oracle with the Siebel acquisition. He became Master Principal Sales Consultant in 2009. He has worked on developing ROI and TCO models for business intelligence for over ten years. Mr. Hutchinson has a BS degree in Chemical Engineering from Princeton University and an MBA in Finance from the University of Chicago.

    Read the article

  • Join us on our Journey to be #1 in SaaS!

    - by jessica.ebbelaar(at)oracle.com
    WHY ORACLE? Oracle is a robust organization that has proven to maintain growth and innovation at all levels with a constant evolving attitude. The main ingredient of Oracles success is the 105.000 talented employees who constantly amaze each other in building a better and more innovative organization. Oracle is a company where YOU can make a difference. What is OD? Oracle Direct is a state-of-the-art, multi-channel EMEA sales operation bringing to life the benefits of Oracle’s complete technology stack. It offers you the unique opportunity to work with the most talented and like-minded sales professionals in the industry. You will have access to world class training and structured career development programmes allowing you to accelerate your Solution Sales career across a multitude of product lines and a choice of attractive locations. What positions are OD Hiring? Oracle is on a journey to be the #1 SaaS vendor in EMEA. Due to recent expansion and acquisitions within our Cloud Business, we are now growing our EMEA Cloud Applications Sales Group in Dublin. We have many exciting NEW opportunities across our CRM and HCM SaaS Sales teams. As a SaaS Sales Account Manager, you will proactively manage an assigned territory / vertical with responsibility for the full sales cycle. This role requires strong business development, solution selling, account management and closing skills. What is the Business Development Group (BDG) The Business Development Group is the key entry point in Oracle for the future Sales and Management talent of the organisation. We are the Demand Generation engine for Oracle in EMEA. We provide revenue generating, quality sales pipeline to our Inside and Field Sales professionals as well as to our Channel Partners. Our current focus is to provide an agile and flexible service offering to our customers and stakeholders to meet ever changing business needs, whilst constantly striving to improve the customer experience, quality of our pipeline, market coverage and penetration. As a SaaS Business Development Consultant (BDC) you will be the first touch point with new customers. 
Your goal is to proactively identify and qualify business opportunities leading to revenue for Oracle. You will work closely with your Inside Sales colleagues who will progress your qualified pipeline and opportunities. Work for us Work for the only multi-pillar SaaS vendor in the market Be part of a FUN, fast paced and truly International sales team  Develop you solution sales EXPERTISE Drive your CAREER development within a structured and supportive environment The Profile You have a passion for selling cutting-edge technology You thrive in a fast paced and dynamic work environment where being the best is paramount Your priority is always the customer You live for a challenge and you love to win Join us on our Journey to be #1 in SaaS and be part of our Cloud Success Story! You will find more information about open roles here

    Read the article

  • PowerShell: Read Excel to Create Inserts

    - by BuckWoody
    I’m writing a series of articles on how to migrate “departmental” data into SQL Server. I also hold workshops on the entire process – from discovering that the data exists to the modeling process and then how to design the Extract, Transform and Load (ETL) process. Finally I write about (and teach) a few methods on actually moving the data. One of those options is to use PowerShell. There are a lot of ways even with that choice, but the one I show is to read two columns from the spreadsheet and output statements that would insert the data using a stored procedure. Of course, you could re-write this as INSERT statements, out to a text file for bcp, or even use a database connection in the script to move the data directly from Excel into SQL Server. This snippet won’t run on your system, of course – it assumes a Microsoft Office Excel 2007 spreadsheet located at c:\temp called VendorList.xlsx. It looks for a tab in that spreadsheet called Vendors. The statement that does the writing just uses one column: Vendor Code. Here’s the breakdown of what I’m doing:
    - In the first block, I connect to Microsoft Office Excel. That connection string is specific to Excel 2007, so if you need a different version you’ll need to look that up.
    - In the second block I set up a selection from the entire spreadsheet based on that tab. Note that if you’re only after certain data you shouldn’t get the whole spreadsheet – that’s just good practice.
    - In the next block I create the text I want, inserting the Vendor Code field as I go.
    - Finally I close the connection.
    Enjoy!

        $ExcelConnection= New-Object -com "ADODB.Connection"
        $ExcelFile="c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;`
        Data Source=$ExcelFile;Extended Properties=Excel 12.0;")
        $strQuery="Select * from [Vendors$]"
        $ExcelRecordSet=$ExcelConnection.Execute($strQuery)
        do {
          Write-Host "EXEC sp_InsertVendors '" $ExcelRecordSet.Fields.Item("Vendor Code").Value "'"
          $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)
        $ExcelConnection.Close()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
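    As a variation on the script above (a sketch only: the dbo.Vendors table and VendorCode column are invented, and this naive version does not escape embedded quotes), the same loop can emit plain INSERT statements instead of stored procedure calls:

        $ExcelConnection = New-Object -com "ADODB.Connection"
        $ExcelFile = "c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=$ExcelFile;Extended Properties=Excel 12.0;")
        $ExcelRecordSet = $ExcelConnection.Execute("Select * from [Vendors$]")
        do {
            # Read the Vendor Code column from the current row and emit an INSERT for it.
            $code = $ExcelRecordSet.Fields.Item("Vendor Code").Value
            Write-Host "INSERT INTO dbo.Vendors (VendorCode) VALUES ('$code');"
            $ExcelRecordSet.MoveNext()
        } until ($ExcelRecordSet.EOF)
        $ExcelConnection.Close()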

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a “production-ready” state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs, in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and “still favored tools I’d have been embarrassed to use in the ’80s”. DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the “mother ship”. I sat blinking in astonishment at the speaker’s vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server, database professionals or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don’t conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL Standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who’ve attempted to apply a standard query that works on MySQL, or some other database, but doesn’t produce the expected results on SQL Server are advised to shun the Standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals, committed to supporting the best databases for the business, whatever they are now and in the future, surely our heart should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause, and sink further when we read messages in the Microsoft documentation informing us that we shouldn’t rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
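    To make the guard-clause point concrete, here is the kind of Standards-based check in question, sketched against an invented dbo.Customers table. The metadata query itself is portable across engines that implement INFORMATION_SCHEMA, even though the IF/BEGIN wrapper shown here is T-SQL:

        -- Create the table only if it does not already exist, using only standard metadata views.
        IF NOT EXISTS (
            SELECT 1
            FROM INFORMATION_SCHEMA.TABLES
            WHERE TABLE_SCHEMA = 'dbo'
              AND TABLE_NAME   = 'Customers'
        )
        BEGIN
            CREATE TABLE dbo.Customers (CustomerID INT NOT NULL PRIMARY KEY);
        END;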

    Read the article

  • jqGrid dynamic select option - beforeEditCell not firing

    - by mango
    I'm creating a jqGrid with one drop-down column. I need the options of the drop-down column to change dynamically, so I thought I could catch the beforeEditCell event; however, it does not seem to be firing. Any idea what I am doing wrong? There is no error, and I did check that I have included the jqGrid edit JS files.

        var lastsel2;
        jQuery(document).ready(function(){
            jQuery("#projectList").jqGrid({
                datatype: 'json',
                url:'projectDrv.jsp',
                mtype: 'GET',
                height: 250,
                colNames:['Node','Proposal #', 'Status', 'Vendor', 'Actions'],
                colModel :[
                    {name:'node', index:'node', width:100, editable:false, sortable:false},
                    {name:'proposal', index:'proposal', width:100, editable:false, resizable:true },
                    {name:'status', index:'status', width:100, resizable:true, sortable:false, editable:false },
                    {name:'vendor', index:'vendor', width:100, resizable:true, editable:false, sortable: false },
                    {name:'actions', index:'actions', width:100, resizable:true, sortable:false, editable: true, edittype:"select" }
                ],
                pager: '#pager',
                rowNum: 10,
                sortname: 'proposal',
                sortorder: 'desc',
                viewrecords: true,
                onSelectRow: function(id){
                    if (id && id!==lastsel2){
                        jQuery('#projectList').jqGrid('restoreRow',lastsel2);
                        jQuery('#projectList').jqGrid('editRow',id,true);
                        lastsel2 = id;
                    }
                },
                beforeEditCell: function(rowid, cellname, value, irow, icol) {
                    alert("before edit here " + rowid);
                    // set editoptions here
                }
            });
        });

    Read the article

  • Modify python USB device driver to only use vendor_id and product_id, excluding BCD

    - by Tony
    I'm trying to modify the Android device driver for calibre (an e-book management program) so that it identifies devices by only vendor id and product id, and excludes BCD. The driver is a fairly simple python plugin, and is currently set up to use all three numbers, but apparently, when Android devices use custom Android builds (i.e. CyanogenMod for the Nexus One), it changes the BCD so calibre stops recognizing it. The current code looks like this, with a simple list of vendor id's, that then have allowed product id's and BCD's with them:

        VENDOR_ID = {
            0x0bb4 : { 0x0c02 : [0x100], 0x0c01 : [0x100]},
            0x22b8 : { 0x41d9 : [0x216]},
            0x18d1 : { 0x4e11 : [0x0100], 0x4e12: [0x0100]},
            0x04e8 : { 0x681d : [0x0222]},
        }

    The line I'm specifically trying to change is:

        0x18d1 : { 0x4e11 : [0x0100], 0x4e12: [0x0100]},

    which is the line for identifying a Nexus One. My N1, running CyanogenMod 5.0.5, has the BCD 0x0226, and rather than just adding it to the list, I'd prefer to eliminate the BCD from the recognition process, so that any device with vendor id 0x18d1 and product id 0x4e11 or 0x4e12 would be recognized. The custom Android rom doesn't change enough for the specifics to matter. The syntax seems to require the BCD in brackets. How can I edit this so that it matches anything in that field?
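
    One way to express "any BCD" in a table shaped like the one above is to use None as a wildcard and compare against it explicitly. This is only a sketch of the idea: calibre's driver framework does its own matching against VENDOR_ID, so the helper below is illustrative, not a drop-in patch for the plugin.

        # Sketch: wildcard BCD matching. None in the product entry means
        # "accept any BCD"; lists keep the original exact-match behaviour.
        VENDOR_ID = {
            0x0bb4: {0x0c02: [0x100], 0x0c01: [0x100]},
            0x22b8: {0x41d9: [0x216]},
            0x18d1: {0x4e11: None, 0x4e12: None},   # Nexus One: ignore BCD
            0x04e8: {0x681d: [0x0222]},
        }

        def is_supported(vendor_id, product_id, bcd):
            """Return True if the device matches, ignoring BCD where it is None."""
            products = VENDOR_ID.get(vendor_id)
            if not products or product_id not in products:
                return False
            allowed_bcds = products[product_id]
            return allowed_bcds is None or bcd in allowed_bcds

        # A CyanogenMod Nexus One (BCD 0x0226) now matches on vendor/product alone.
        print(is_supported(0x18d1, 0x4e11, 0x0226))   # True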

    Read the article

  • Is an LSA MSV1_0 subauthentication package needed for some impersonation use cases?

    - by Chris Sears
    Greetings, I'm working with a vendor who has implemented some code that uses a Windows LSA MSV1_0 subauthentication package (MSDN info if you're interested: http://msdn.microsoft.com/en-us/library/aa374786(VS.85).aspx ) and I'm trying to figure out if it's necessary. As far as I can tell, the subauthentication routine and filter allow for hooking or customizing the standard LSA MSV1_0 logon event processing. The issue is that I don't understand why the vendor's product would need these capabilities. I've asked them and they said they use it to perform impersonation. The product definitely does need to do impersonation, but based on my limited win32 knowledge, they could get the functionality they need using the normal auth APIs (LsaLogonUser, ImpersonateLoggedOnUser, etc) without the subauthentication package. Furthermore, I've worked with a number of similar products that all do impersonation, and this is the only one that's used a subauthentication package. If you're wondering why I would care, a previous version of the product had a bug in the subauthentication package dll that would cause lockups or bluescreens. That makes me rather nervous and has me questioning the use of such a low-level, kernel sensitive interface. I'd like to go back to the vendor and say "There's no way you could need an LSA subauth package for impersonation - take it out", but I'm not sure I understand the use cases and possible limitations of the standard win32 authentication/impersonation APIs well enough to make that claim definitively. So, to the win32 security gurus out there, is there any reason you would need an LSA MSV1_0 subauthentication package if all you were doing is impersonation? Thanks in advance for any thoughts!
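
    For reference, the "standard APIs" route the question alludes to looks roughly like the sketch below: obtain a token and impersonate it, with no subauthentication package involved. This is a minimal ctypes illustration of the Win32 call sequence (Windows-only, placeholder credentials, minimal error handling); it says nothing about whether the vendor's product has other reasons for hooking MSV1_0.

        # Sketch: plain Win32 impersonation via LogonUserW + ImpersonateLoggedOnUser.
        import ctypes
        from ctypes import wintypes

        advapi32 = ctypes.windll.advapi32
        kernel32 = ctypes.windll.kernel32

        LOGON32_LOGON_INTERACTIVE = 2
        LOGON32_PROVIDER_DEFAULT = 0

        token = wintypes.HANDLE()
        if not advapi32.LogonUserW("someuser", "SOMEDOMAIN", "password",
                                   LOGON32_LOGON_INTERACTIVE,
                                   LOGON32_PROVIDER_DEFAULT,
                                   ctypes.byref(token)):
            raise ctypes.WinError()

        if not advapi32.ImpersonateLoggedOnUser(token):
            raise ctypes.WinError()

        try:
            pass  # do work as the impersonated user here
        finally:
            advapi32.RevertToSelf()       # drop back to the original context
            kernel32.CloseHandle(token)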

    Read the article

  • Red Box is not working

    - by palani
    Hi, I have installed the Red Box plugin using the following command: script/plugin install svn://rubyforge.org/var/svn/ambroseplugins/redbox. It installed successfully. I then ran rake update_scripts from /myapp/vendor/plugins/redbox, and it shows me the following output:

        (in /myapp/vendor/plugins/redbox)
        rake aborted!
        private method `copy' called for File:Class
        /home/myapp/vendor/plugins/redbox/Rakefile:28
        (See full trace by running task with --trace)

    I don't know how to solve this. Then I understood that "rake update_scripts" only copies the JS and CSS files, so I manually copied the redbox.js and redbox.css files into the respective places under the /public folder. I include the following in my application.html.erb:

        <%= stylesheet_link_tag 'redbox' %>
        <%= javascript_include_tag :defaults %>
        <%= javascript_include_tag 'redbox' %>

    They are included in the page successfully. The following is my view code:

        <%= link_to_remote_redbox('Red_box', :url => {:action => 'log'}, :method => 'get') %>

    The popup box doesn't appear, and I have no clue what the exact error is. Is there a jQuery clash? Please help me.

    Read the article

  • Trying to find USB device on iphone with IOKit.framework

    - by HuGeek
    Hi all, i'm working on a project were i need the usb port to communicate with a external device. I have been looking for exemple on the net (Apple and /developer/IOKit/usb exemple) and trying some other but i can't even find the device. In my code i blocking at the place where the fucntion looks for a next iterator (pointer in fact) with the function getNextIterator but never returns a good value so the code is blocking. By the way i am using toolchain and added IOKit.framework in my project. All i what right now is the communicate or do like a ping to someone on the USB bus!! I blocking in the 'FindDevice'....i can't manage to enter in the while because the variable usbDevice is always = to 0....i have tested my code in a small mac program and it works... Thanks Here is my code : IOReturn ConfigureDevice(IOUSBDeviceInterface **dev) { UInt8 numConfig; IOReturn result; IOUSBConfigurationDescriptorPtr configDesc; //Get the number of configurations result = (*dev)->GetNumberOfConfigurations(dev, &numConfig); if (!numConfig) { return -1; } // Get the configuration descriptor result = (*dev)->GetConfigurationDescriptorPtr(dev, 0, &configDesc); if (result) { NSLog(@"Couldn't get configuration descriptior for index %d (err=%08x)\n", 0, result); return -1; } ifdef OSX_DEBUG NSLog(@"Number of Configurations: %d\n", numConfig); endif // Configure the device result = (*dev)->SetConfiguration(dev, configDesc->bConfigurationValue); if (result) { NSLog(@"Unable to set configuration to value %d (err=%08x)\n", 0, result); return -1; } return kIOReturnSuccess; } IOReturn FindInterfaces(IOUSBDeviceInterface *dev, IOUSBInterfaceInterface **itf) { IOReturn kr; IOUSBFindInterfaceRequest request; io_iterator_t iterator; io_service_t usbInterface; IOUSBInterfaceInterface **intf = NULL; IOCFPlugInInterface **plugInInterface = NULL; HRESULT res; SInt32 score; UInt8 intfClass; UInt8 intfSubClass; UInt8 intfNumEndpoints; int pipeRef; CFRunLoopSourceRef runLoopSource; NSLog(@"Debut FindInterfaces \n"); request.bInterfaceClass = kIOUSBFindInterfaceDontCare; request.bInterfaceSubClass = kIOUSBFindInterfaceDontCare; request.bInterfaceProtocol = kIOUSBFindInterfaceDontCare; request.bAlternateSetting = kIOUSBFindInterfaceDontCare; kr = (*dev)->CreateInterfaceIterator(dev, &request, &iterator); usbInterface = IOIteratorNext(iterator); IOObjectRelease(iterator); NSLog(@"Interface found.\n"); kr = IOCreatePlugInInterfaceForService(usbInterface, kIOUSBInterfaceUserClientTypeID, kIOCFPlugInInterfaceID, &plugInInterface, &score); kr = IOObjectRelease(usbInterface); // done with the usbInterface object now that I have the plugin if ((kIOReturnSuccess != kr) || !plugInInterface) { NSLog(@"unable to create a plugin (%08x)\n", kr); return -1; } // I have the interface plugin. I need the interface interface res = (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBInterfaceInterfaceID), (LPVOID*) &intf); (*plugInInterface)->Release(plugInInterface); // done with this if (res || !intf) { NSLog(@"couldn't create an IOUSBInterfaceInterface (%08x)\n", (int) res); return -1; } // Now open the interface. This will cause the pipes to be instantiated that are // associated with the endpoints defined in the interface descriptor. 
kr = (*intf)->USBInterfaceOpen(intf); if (kIOReturnSuccess != kr) { NSLog(@"unable to open interface (%08x)\n", kr); (void) (*intf)->Release(intf); return -1; } kr = (*intf)->CreateInterfaceAsyncEventSource(intf, &runLoopSource); if (kIOReturnSuccess != kr) { NSLog(@"unable to create async event source (%08x)\n", kr); (void) (*intf)->USBInterfaceClose(intf); (void) (*intf)->Release(intf); return -1; } CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopDefaultMode); if (!intf) { NSLog(@"Interface is NULL!\n"); } else { *itf = intf; } NSLog(@"End of FindInterface \n \n"); return kr; } unsigned int FindDevice(void *refCon, io_iterator_t iterator) { kern_return_t kr; io_service_t usbDevice; IOCFPlugInInterface **plugInInterface = NULL; HRESULT result; SInt32 score; UInt16 vendor; UInt16 product; UInt16 release; unsigned int count = 0; NSLog(@"Searching Device....\n"); while (usbDevice = IOIteratorNext(iterator)) { // create intermediate plug-in NSLog(@"Found a device!\n"); kr = IOCreatePlugInInterfaceForService(usbDevice, kIOUSBDeviceUserClientTypeID, kIOCFPlugInInterfaceID, &plugInInterface, &score); kr = IOObjectRelease(usbDevice); if ((kIOReturnSuccess != kr) || !plugInInterface) { NSLog(@"Unable to create a plug-in (%08x)\n", kr); continue; } // Now create the device interface result = (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID), (LPVOID)&dev); // Don't need intermediate Plug-In Interface (*plugInInterface)->Release(plugInInterface); if (result || !dev) { NSLog(@"Couldn't create a device interface (%08x)\n", (int)result); continue; } // check these values for confirmation kr = (*dev)->GetDeviceVendor(dev, &vendor); kr = (*dev)->GetDeviceProduct(dev, &product); //kr = (*dev)->GetDeviceReleaseNumber(dev, &release); //if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID) || (release != LegoUSBRelease)) { if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID)) { NSLog(@"Found unwanted device (vendor = %d != %d, product = %d != %d, release = %d)\n", vendor, kUSBVendorID, product, LegoUSBProductID, release); (void) (*dev)-Release(dev); continue; } // Open the device to change its state kr = (*dev)->USBDeviceOpen(dev); if (kr == kIOReturnSuccess) { count++; } else { NSLog(@"Unable to open device: %08x\n", kr); (void) (*dev)->Release(dev); continue; } // Configure device kr = ConfigureDevice(dev); if (kr != kIOReturnSuccess) { NSLog(@"Unable to configure device: %08x\n", kr); (void) (*dev)->USBDeviceClose(dev); (void) (*dev)->Release(dev); continue; } break; } return count; } // USB rcx Init IOUSBInterfaceInterface** osx_usb_rcx_init (void) { CFMutableDictionaryRef matchingDict; kern_return_t result; IOUSBInterfaceInterface **intf = NULL; unsigned int device_count = 0; // Create master handler result = IOMasterPort(MACH_PORT_NULL, &gMasterPort); if (result || !gMasterPort) { NSLog(@"ERR: Couldn't create master I/O Kit port(%08x)\n", result); return NULL; } else { NSLog(@"Created Master Port.\n"); NSLog(@"Master port 0x:08X \n \n", gMasterPort); } // Set up the matching dictionary for class IOUSBDevice and its subclasses matchingDict = IOServiceMatching(kIOUSBDeviceClassName); if (!matchingDict) { NSLog(@"Couldn't create a USB matching dictionary \n"); mach_port_deallocate(mach_task_self(), gMasterPort); return NULL; } else { NSLog(@"USB matching dictionary : %08X \n", matchingDict); } CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, 
&LegoUSBVendorID)); CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &LegoUSBProductID)); result = IOServiceGetMatchingServices(gMasterPort, matchingDict, &gRawAddedIter); matchingDict = 0; // this was consumed by the above call // Iterate over matching devices to access already present devices NSLog(@"RawAddedIter : 0x:%08X \n", &gRawAddedIter); device_count = FindDevice(NULL, gRawAddedIter); if (device_count == 1) { result = FindInterfaces(dev, &intf); if (kIOReturnSuccess != result) { NSLog(@"unable to find interfaces on device: %08x\n", result); (*dev)-USBDeviceClose(dev); (*dev)-Release(dev); return NULL; } // osx_usb_rcx_wakeup(intf); return intf; } else if (device_count 1) { NSLog(@"too many matching devices (%d) !\n", device_count); } else { NSLog(@"no matching devices found\n"); } return NULL; } int main(int argc, char *argv[]) { int returnCode; NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSLog(@"Debut du programme \n \n"); osx_usb_rcx_init(); NSLog(@"Fin du programme \n \n"); return 0; // returnCode = UIApplicationMain(argc, argv, @"Untitled1App", @"Untitled1App"); // [pool release]; // return returnCode; }

    Read the article

  • TCL TDom: Looping through Objects

    - by pws5068
    Using TDom, I would like to cycle through a list of objects in the following format:

        <object>
          <type>Hardware</type>
          <name>System Name</name>
          <description>Basic Description of System.</description>
          <attributes>
            <vendor>Dell</vendor>
            <contract>MM/DD/YY</contract>
            <supportExpiration>MM/DD/YY</supportExpiration>
            <location>Building 123</location>
            <serial>xxx-xxx-xxxx</serial>
            <mac>some-mac-address</mac>
          </attributes>
        </object>
        <object>
          <type>Software</type>
          <name>Second Object</name>
          ...

    Then I use TDom to make a list of objects:

        set dom [dom parse $xml]
        set doc [$dom documentElement]
        set nodeList [$doc selectNodes /systems/object]

    So far I've done this to (theoretically) select every "Object" node from the list. How can I loop through them? Is it just:

        foreach node $nodeList {

    For each object, I need to retrieve the association of each attribute. From the example, I need to remember that the "name" is "System Name", "vendor" is "Dell", etc. I'm new to TCL but in other languages I would use an object or an associative list to store these. Is this possible? Can you show me an example of the syntax to select an attribute in this manner?

    Read the article

  • Posting XML form data to a RESTful Server with Javascript or PHP

    - by pjs-worker
    Hi folks, I've been given the task of posting to a RESTful server. I'm new to the official "REST" but I've played with the concept before. However, this time I have an XML payload example file that I am supposed to post, and I'm struggling to figure out how the two relate. Can you help? Right now I can post to a specific site, say www.pcpost.com/schema/Application, and I can generate the URL for the initial call, i.e.: postApplication?userid=4&... Being relatively new to web programming, I find that I don't know how to take the following and interface it with the server. I'm at least familiar with Javascript and PHP. If this is impossible with those two, I can learn whatever would be best. Thanks for your help on this. C

        <?xml version="1.0" ?>
        <Application xmlns="http://www.pcpost.com/schema/Application" SchemaVersion="1.0"
                     ProgramId="8" ApplicationDate="2009-08-29">
          <Vendors>
            <Vendor Role="Applicant" Company="Test Company" Contact="Smith, John"/>
            <Vendor Role="Seller" Company="Test Company" Contact="Doe, Jane"/>
            <Vendor Role="Installer" Company="Test Company" Contact="Funk, Carl"/>
          </Vendors>
          <Participants>
            <Participant TaxStatus="Individual" Sector="Commercial">
              <Roles>
                <Role>Host Customer</Role>
              </Roles>
            </Participant>
          </Participants>
        </Application>
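
    As a rough illustration of the HTTP mechanics involved, here is a minimal sketch in Python using the requests library. The endpoint URL, authentication scheme and content type are assumptions; the vendor's documentation defines the real values, and the same pattern maps directly onto PHP's cURL functions or an XMLHttpRequest in JavaScript.

        # Sketch: POST an XML payload to a REST endpoint. All names are placeholders.
        import requests

        with open("application_payload.xml", "rb") as payload_file:
            xml_payload = payload_file.read()

        response = requests.post(
            "https://www.pcpost.com/schema/Application",
            data=xml_payload,
            headers={"Content-Type": "application/xml"},
            auth=("username", "password"),   # placeholder credentials
            timeout=30,
        )

        print(response.status_code)
        print(response.text)   # the server's response body, if any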

    Read the article

  • Rails - strip xml import from whitespace and line break

    - by val_to_many
    Hey folks, I am stuck with something quite simple but really annoying: I have an xml file with one node where the content includes line breaks and whitespace. Sadly I can't change the xml.

        <?xml version="1.0" encoding="utf-8" ?>
        <ProductFeed>
        ACME Ltd. Fooproduct Foo Root :: Bar Category

    I get to the node and can read from it without trouble:

        url = "http://feeds.somefeed/feed.xml.gz"
        @source = open((url), :http_basic_authentication=>["USER", "PW"])
        @gz = Zlib::GzipReader.new(@source)
        @result = @gz.read
        @doc = Nokogiri::XML(@result)
        @doc.xpath("/ProductFeed/Vendors/Vendor").each do |manuf|
          vendor = manuf.css("Name").first.text
          manuf.xpath("//child::Product").each do |product|
            product_name = product.css("Name").text
            foocat = product.css("Category").text
            puts "#{vendor} ---- #{product_name} ---- #{foocat} "
          end
        end

    This results in: ACME Ltd. ---- Fooproduct ---- Foo Root :: Bar Category. Obviously there are line breaks and tab stops or spaces in the string returned by product.css("Category").text. Does anyone know how to strip the line breaks, tabs and extra spaces from the result right here? Alternatively I could do that in the next step, where I do a find on 'foocat' like barcat = Category.find_by_foocat(foocat). Thanks for helping! Val
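
    The transformation being asked for is simply "collapse runs of whitespace and trim the ends". Sketched below in Python for illustration; the Ruby equivalent would combine String#gsub with String#strip on the value returned by Nokogiri.

        # Sketch: collapse internal whitespace runs to a single space and trim the ends.
        import re

        def collapse_whitespace(text):
            """Return text with whitespace runs collapsed and ends trimmed."""
            return re.sub(r"\s+", " ", text).strip()

        raw = "\n\t  Foo Root ::   Bar Category \n"
        print(collapse_whitespace(raw))   # -> "Foo Root :: Bar Category"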

    Read the article

  • Have I found a security problem in an API or do I just not understand SSL?

    - by jamieb
    I'm working on building a set of Python bindings around an XML-based API provided by a vendor. The vendor requires that all transactions be conducted over SSL. Using a Linux box, I created a key file and a CSR for my application. Using their self-service web portal, I then generate a certificate using that CSR. Both the key file and the certificate are used when making the SSL request to the API. I'm now working on designing exception classes to make error messages more verbose (and, hopefully, more useful to developers using my bindings). Part of my testing has included altering the key file: transpose a couple characters here, replace 4 or 5 with random characters there, etc. To my surprise, altering the key file had no effect! As long as I didn't change the total length of it, the API didn't complain about a bad key file. The only way I was able to throw an error was by swapping in a completely different key from another application. At that point, the API complained about the Common Name not matching. Is this normal behavior or has the vendor not properly implemented SSL?
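
    One quick local sanity check for the behaviour described above: a TLS library is normally expected to refuse a private key that is corrupted or that does not match the certificate. The sketch below uses Python's ssl module with placeholder file names; it only demonstrates client-side loading and says nothing about what the vendor's endpoint actually validates.

        # Sketch: verify locally that a key/cert pair still load and match.
        import ssl

        context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        try:
            context.load_cert_chain(certfile="client.crt", keyfile="client.key")
            print("key and certificate load cleanly and match")
        except ssl.SSLError as error:
            print("rejected locally:", error)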

    Read the article

  • Unit Testing a rails 2.3.5 plugin

    - by brad
    I'm writing a new plugin for a rails 2.3.5 app. I've included an app directory (which makes it an engine) so I can easily load some extra routes. Not sure if that affects anything. Anyway, in the test directory I have two files: test_helper.rb and my_plugin_test.rb. These files were generated automatically using script/generate plugin my_plugin. When I go to the vendor/plugins/my_plugin directory and run rake test, they don't seem to run. I get the following console output:

        (in /Users/me/Repos/my_app/source/trunk/vendor/plugins/my_plugin)
        /Users/me/.rvm/rubies/jruby-1.4.0/bin/jruby -I"lib:lib:test" "/Users/me/.rvm/gems/jruby-1.4.0/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/my_plugin_test.rb"

    So it obviously sees my test file, but none of the tests inside get run; I just get back to my console prompt. What am I missing here? I figured the generated code would work out of the box. Here are the two files:

    test_helper.rb

        require 'rubygems'
        require 'active_support'
        require 'active_support/test_case'

    my_plugin_test.rb

        require 'test_helper'

        class MyPluginTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "Factories are supported" do
            assert_not_nil Factory
          end
        end

    File structure

        vendor
        - plugins
          - my_plugin
            - app
            - config
              - routes.rb
            - generators
              - my_plugin
                - some generator files.rb
            - lib
              - my_plugin.rb
              - my_plugin
                - my_plugin_lib_file.rb
            - rails
              - init.rb
            - Rakefile
            - tasks
              - my_plugin_tasks.rake
            - test
              - test_helper.rb
              - my_plugin_test.rb

    Read the article

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app, where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now. Each order is Vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order I need to increment OrderNumber for a particular Vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow:

        begin transaction

        declare @n int

        select @n = OrderNumber
        from OrderNumberInfo
        where VendorID = @vendorID

        update OrderNumberInfo
        set OrderNumber = @n + 1
        where OrderNumber = @n and VendorID = @vendorID

        commit transaction

    Now, I've read about select ... with (updlock rowlock), pessimistic locking, etc., but just cannot fit all this into a coherent picture: How do these hints play with SQL Server 2008's snapshot isolation? Do they perform row-level, page-level or even table-level locks? How does this tolerate multiple users trying to generate numbers for a single Vendor? What isolation levels are appropriate here? And generally - what is the way to do such things?
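
    One common way to sidestep the select-then-update race entirely is to bump and read the counter in a single atomic UPDATE using SQL Server's OUTPUT clause, so one statement holds the row lock for the whole operation. The sketch below shows that pattern from Python via pyodbc; the connection string and table names are placeholders, and this is one option rather than the definitive answer to the questions above.

        # Sketch: atomically increment and return the per-vendor order number.
        import pyodbc

        def next_order_number(vendor_id):
            connection = pyodbc.connect(
                "DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=localhost;DATABASE=Shop;Trusted_Connection=yes;"
            )
            try:
                cursor = connection.cursor()
                cursor.execute(
                    "UPDATE OrderNumberInfo "
                    "SET OrderNumber = OrderNumber + 1 "
                    "OUTPUT inserted.OrderNumber "
                    "WHERE VendorID = ?",
                    vendor_id,
                )
                row = cursor.fetchone()     # the freshly incremented number
                connection.commit()
                return row[0] if row else None
            finally:
                connection.close()

        print(next_order_number(42))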

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >