Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Is there a way to have customised text instead of the subject in Thunderbird's inbox list?

    - by peterp
    I am getting a lot of informational emails like "You've got a new message from ..." or "Notification of Donation Received", which often do not contain any useful information in the subject, so I have to open the email to see who sent the message or who donated which amount. I'd love to be able to make TB parse incoming emails and then display something interesting instead of the original subject, e.g. by defining a regular expression pattern. I know how to write regular expressions, but I do not know whether there is a way, or an addon, to modify the displayed text in the messages view. EDIT for clarification: I would like donation notifications from Paypal to be displayed not as the original "Notification of Donation Received" but rather as "Paypal: John Doe has donated $50".
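    There is no stock Thunderbird setting for this, but the transformation itself is easy to sketch in JavaScript (the language a Thunderbird add-on would be written in). Everything below is hypothetical: the rule table, the msg fields, and the displaySubject() hook would all have to be wired into an add-on that intercepts the message list.

        // Hypothetical rule table mapping boilerplate subjects to informative ones.
        const rules = [
          {
            match: /^Notification of Donation Received$/,
            // donorName and amount would have to be parsed out of the message body
            rewrite: (msg, m) => "Paypal: " + msg.donorName + " has donated " + msg.amount
          },
          {
            match: /^You've got a new message from (.+)$/,
            rewrite: (msg, m) => m[1] + ": " + msg.firstLine
          }
        ];

        function displaySubject(msg) {
          for (const rule of rules) {
            const m = rule.match.exec(msg.subject);
            if (m) return rule.rewrite(msg, m); // first matching rule wins
          }
          return msg.subject; // no rule matched: keep the original subject
        }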

    Read the article

  • Java Applet or Unity3D for Cross-Platform 3D Surveying App

    - by Jake M
    Do you think a Java applet or a Unity3D application is the better option for making a cross-browser 3D web app? I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites including contours, water pipeline locations, buildings, etc. The application must work on Windows desktop, Android, iOS and Windows Phone. This is why I am tending towards a web app as opposed to a cross-platform smart phone library (like MoSync or Marmalade). The 3D environments will be navigable (by dragging around) and contain simple (not detailed) 3D objects like buildings, mountains, pipelines, etc. One thing I know is that WebGL is out because it doesn't work on IE and has limited support on smart phones (am I correct to completely disregard WebGL?). Will future smart phone browsers continue to support Java applets? Also, is it really true that I can write ONE application/game in Unity3D and simply compile it to run on Windows, Mac, Xbox 360, PlayStation 3, Wii, iPad, iPhone and Android? Would you suggest the Unity3D application path or the Unity3D Web Player path? Concerning Unity3D, there's one thing I am unsure about: do all Unity3D features work on iOS and Android?

    Read the article

  • Which software should I use to keep track of my project?

    - by Exa
    I'm about to start the first real phase of my game development, which will consist of gathering information and resources and defining where I want to go and what I will need to get there. I just want to make sure that I'm prepared as well as possible before I actually start development. I don't like the thought of using Microsoft Word or Excel for my project management... I have already worked with MS Project, but I don't think it fits my needs. I need software where I can easily maintain project steps, milestones, important issues, information about technologies and engines I use, as well as simple notes and thoughts I just want to write down. I usually prefer a whiteboard for stuff like that, but unfortunately it's not a persistent way of storing things. ;) Writing things down the old-school way is also something I can imagine, but only for quick notes... Which software do you use for that? Are there commonly used programs? Is there any free software at all?

    Read the article

  • Introduction to the ADF Debugger

    - by Shay Shmeltzer
    Not that you'll ever need this blog entry - after all, there are never bugs in the code that YOU write. But maybe one day one of your peers will ask you for help debugging their ADF application, so here we go... One of the cool features of JDeveloper and ADF is the ADF Debugger - a way to debug the declarative parts of Oracle ADF. The debugger goes beyond your regular Java debugger and clearly shows you specific information related to Oracle ADF - things like where you are in the taskflow/region hierarchy, what is in your various scopes, what the value of a specific EL expression is, and much more. However, from the number of posts on OTN where people are saying "I placed a System.out.println() to see what the value was...", it seems that not many are familiar with the power of the debugger. So here is a short demo that shows you some aspects of the debugger, such as:
    - Setting breakpoints on various ADF artifacts
    - The ADF Structure window
    - The ADF Data window
    - The EL Evaluator window
    Want to learn more about debugging ADF applications? I highly recommend that you go back in time to 2009 and attend Steve Muench's OOW presentation about ADF debugging. Can't travel in time yet? Then the second best option is to look at his very clear ADF Debugging Slides, which were the inspiration for the above demo.

    Read the article

  • Automatic Site Creation

    - by Eddy Freeman
    I have created a platform where I want users to sign up and have a new site created for them instantly. Users will register and afterwards receive a username and password for their personal site. The system will work like ecommerce platforms such as www.shopify.com, www.bigcommerce.com, etc., where users sign up and a new web-shop is created for them instantly. I have been searching for a while for how to create a script to automate this task, but couldn't find any tutorials. I am using LAMP (Linux, Apache, MySQL and PHP). Can someone guide me on how to write such a script, or point me to a tutorial, a book, or similar scripts that automate this task? Sorry if this is the wrong place for this question. Thanks for your help.
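    For what it's worth, a provisioning script usually just automates the steps you would do by hand for one shop. A rough sketch follows, shown as Node.js-flavoured pseudocode purely for illustration (the asker's stack is PHP, every path, command and name here is hypothetical, and user input must be validated before it goes anywhere near a shell):

        const { execSync } = require('child_process');
        const fs = require('fs');

        function provisionShop(user) {
          // assumes user.id and user.name are already validated/sanitised
          const dbName = 'shop_' + user.id;
          // 1. create a database (or a table prefix) for the new tenant
          execSync('mysqladmin create ' + dbName);
          // 2. copy the application template into the tenant's docroot
          execSync('cp -r /var/www/template /var/www/shops/' + user.name);
          // 3. write the tenant's configuration (DB name, shop URL, ...)
          fs.writeFileSync('/var/www/shops/' + user.name + '/config.json',
            JSON.stringify({ db: dbName, host: user.name + '.example.com' }));
          // 4. point a vhost or wildcard subdomain at the new docroot
          //    (many platforms skip steps 2-4 entirely: one shared codebase,
          //    wildcard DNS, and a tenant lookup keyed on the Host header)
        }

    The note in step 4 is how Shopify-style platforms typically scale: a single multi-tenant codebase rather than one copied site per customer.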

    Read the article

  • Recommended Setup

    - by Chris Ryan
    I have been running into a speed issue with my MSSQL database setup. Here is my scenario:
    - About 100M rows
    - Average: 1k updates per second
    - Hard drives: RAID 10 SSD, MDF -- Active Time: 0
    - Log drives: 1 SSD, LDF (Simple Recovery) -- Active Time: 99.9 -- Queue: 8
    I do not need a backup of the log, so it is set to simple recovery, but my bottleneck is still at my log. I get high WRITELOG wait times and thus it cannot update any faster. I can't do bulk updates/transactions; each update needs to be one at a time. Is my only option to increase the write performance of the log drive, by adding more drives to the RAID array? Any suggestions on increasing the performance?

    Read the article

  • Paranoid Encryption

    - by Lord Jaguar
    Call me paranoid, but I really like to keep my stuff secret, yet readily available on the cloud. So, I'm asking this question. How safe and reliable is encryption software (e.g., TrueCrypt)? The reason I ask is: what if I encrypt my data today with this software and after a couple of years, the software is gone? What happens to my encrypted data? Is it equally safe to AES-encrypt using 7-zip? Will it provide the same or an equivalent level of encryption as TrueCrypt or other encryption software? (I agree TrueCrypt will be better because of the container encryption it gives.) And what happens if 7-zip shuts down after 5 years? I am sorry if I am sounding paranoid, but I am coming back to my original question... Is there any application/software-independent encryption? Meaning, can I encrypt with one piece of software and decrypt with another, so that I will not be dependent on just one vendor? I want my encryption to depend ONLY on the password and NOT on the encryption program/software. The next question: can I write my own program that does AES or stronger encryption when I give it a passphrase, so that I don't need to depend on third-party software for encryption? If yes, which languages support this? Can someone give me a heads up as to where to look in case I write my own encryption program?
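    On the last two questions: AES is a published standard, so any correct implementation can decrypt what another produced, as long as the key-derivation scheme and its parameters are also recorded; that is what keeps the data independent of any one vendor. As a sketch of how small a passphrase-only tool can be, here is a hypothetical (unaudited, no error handling) Node.js example using only the standard crypto module:

        const crypto = require('crypto');

        function encrypt(plaintext, passphrase) {
          const salt = crypto.randomBytes(16);
          const iv = crypto.randomBytes(12);
          const key = crypto.scryptSync(passphrase, salt, 32); // 256-bit key from passphrase
          const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
          const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
          // salt, IV and auth tag are not secret; store them with the ciphertext
          return Buffer.concat([salt, iv, cipher.getAuthTag(), ciphertext]);
        }

        function decrypt(blob, passphrase) {
          const salt = blob.subarray(0, 16);
          const iv = blob.subarray(16, 28);
          const tag = blob.subarray(28, 44);
          const data = blob.subarray(44);
          const key = crypto.scryptSync(passphrase, salt, 32);
          const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
          decipher.setAuthTag(tag);
          return Buffer.concat([decipher.update(data), decipher.final()]).toString('utf8');
        }

        console.log(decrypt(encrypt('my secret', 'long passphrase here'), 'long passphrase here'));

    The part you would need to document to stay vendor-independent is the container layout (salt, IV, auth tag) and the key-derivation parameters, since the passphrase alone is not quite enough to reconstruct the key.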

    Read the article

  • Windows Despooler Requires Administrator to Print

    - by Software Monkey
    Does anyone know what changes I might need to make to allow restricted users to print using a printer configured for spooling? My Windows XP SP3 system currently requires me to use an admin account for printing if the printer is configured to spool documents before printing. If the printer is configured for direct printing, it works for all accounts. This used to work, and some months back it just stopped, and I can't pin down why. The printer itself is configured to give all users complete authority. My system is locked down for restricted users, giving them only read authority to the entire file system except their data directories, which is how I have run my systems for years. I assume there may be a directory somewhere that I need to allow users to write to.

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script generates the proper command, but when run within the script it outputs an error. However, if the same command is run manually, everything works... ??? Here is the script, based on one easily found with Google:
        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=
    When run, I receive the error:
        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.
    Any help you could offer would be greatly appreciated.

    Read the article

  • Executing NUnit Tests using the Visual Studio 2012 Test Runner

    - by David Paquette
    At a recent Visual Studio 2012 event at the Calgary .NET User Group, I was told that I could run my NUnit tests directly in Visual Studio 2012 without any special plugins.  Naturally, I was very excited and I immediately tried running my NUnit tests. I was somewhat disappointed to see that the Test Runner did not discover any of my NUnit tests.  Apparently, you do still need to install an extension that supports NUnit.  Microsoft has completely re-written the Test Runner in Visual Studio 2012 and opened it up for anyone to write Test Adapters for any unit test framework (not just MSTest).  Once the correct test adapters are installed, everything works great.  Luckily, there are a good number of adapters already written. Here are some Test Adapters that you might find useful:
    - NUnit Test Adapter – this one is still in beta, but it does work with the official Visual Studio 2012 release
    - xUnit.net Test Adapter
    - Silverlight Unit Test Adapter
    - Chutzpah Test Adapter
    Overall, I still prefer the unit test runner in ReSharper, but this is a great new feature for those who might not have a ReSharper license.

    Read the article

  • MacOS X 10.6 Portable Home Directory sync fails due to FileSync agent crashing

    - by tegbains
    On one of our cleanly installed Mac Pro machines running Mac OS X 10.6.6, connected to our Mac OS X 10.6.6 Server, syncing data using Portable Home Directories fails. It seems to be due to the FileSync agent crashing during the home sync. We get -41 and -8062 errors, which we suspect indicate that there is too much data or that the FileSync agent can't read the files. The user is the owner of the files and can read/write all of them.
        < Logout 0:: [11/02/04 13:10:42.751] Error -41 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2. (source = NO)
        < Logout 0:: [11/02/04 13:10:42.758] Error -8062 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]. (source = NO)
        < Logout 1:: [11/02/04 13:10:42.758] -[DeepCopyContext deepCopyError:sourceError:sourceRef:]: error = -8062, wasSource = NO: return shouldContinue = NO

    Read the article

  • Redirecting to a different exe for download based on user agent

    - by Ra
    I own a Linux/Apache site where I host exe files for download. Now, when a user clicks this link to my site (published on another site): http://mysite.com/downloads/file.exe I need to dynamically check their user agent and redirect them to either http://mysite.com/downloads/file-1.exe or http://mysite.com/downloads/file-2.exe It seems to me that I have two options:
    1. Put a .htaccess file in place stating that .exe files should be handled as scripts, then write a script that checks the user agent and redirects to a real exe placed in another folder. Call this script file.exe.
    2. Use Apache mod_rewrite to point file.exe to redirect.php.
    Which of these is better? Any other considerations? Thanks.

    Read the article

  • Restful WebAPI VS Regular Controllers

    - by Rohan Büchner
    I'm doing some R&D on what seems like a very confusing topic; I've also read quite a few of the other SO questions, but I feel my question might be unique enough to warrant my asking. We've never developed an app using pure WebAPI. We're trying to write an SPA-style app, where the back end is fully decoupled from the front-end code. Assuming our service does not know anything about who is accessing/consuming it: WebAPI seems like the logical route to serve data, as opposed to using the standard MVC controllers and serving our data via an action result converted to JSON. This, to me at least, seems like an MC design... which seems odd, and not what MVC was meant for. (Look mom... no view.) What would be considered normal convention in terms of performing action-y calls? My sense is that my understanding of WebAPI is incorrect. The way I perceive WebAPI is that it's meant to be used in a CRUD sense, but what if I want to do something like "InitialiseMonthEndPayment"? Would I need to create a WebAPI controller called InitialiseMonthEndPaymentController and then perform a POST? That seems a bit weird, as opposed to an MVC controller where I can just add a new action on the MonthEnd controller called InitialisePayment. Or does this require a mindset shift in terms of design? Any further links on this topic would be really useful, as my fear is we implement something that might be weird and could turn into a coding/maintenance concern later on.

    Read the article

  • What to do when you're the interviewer and you don't like your job?

    - by emcb
    I'm in a sorta strange predicament, and I could use some advice. When I was interviewing for my current job, the job description I was given seemed pretty darn nice to me. Without going into the details, the job hasn't quite turned out the way it was advertised. The company is great and takes care of its employees, but for someone who cares about the code they write and the work they do, it's a bad environment - effectively, we operate between 0.5 and 1.0 on the Joel test, and due to political issues we're not going to move beyond that any time soon. Bitter? Maybe. OK... so I'm in the market for a new job. But that's not where my dilemma is. The problem that I see coming is that I will be participating in interviewing some candidates for a position on my team, and I'm not sure what to do. I've heard through the grapevine that we have some really solid, promising, fresh-out-of-college prospects coming in to interview, and I honestly dread the thought of somebody having their first experience of engineering in this department. So I'm wondering: what should I do if/when the interviewee asks me
    - "Do you like your job?" (no)
    - "What kind of projects would I be working on?" (mostly static HTML/CSS changes)
    - anything else that would elicit a negative answer if answered truthfully?
    Do I tell the truth, to give the candidate a real picture of the job? What if this scares them away, and what if it gets blamed on me? Do I fib or lie, saying we work on exciting projects with lots of flexibility, like the pitch my boss will give, when the reality is quite different? Should I feel any kind of moral responsibility to let a promising young developer know that this isn't the job for them, or should I shut up and be 100% loyal to the company? Any approaches or advice are appreciated. I hope I don't come across as overly dramatic - I honestly struggle with this question.

    Read the article

  • Learning a programming language for computing functions on integers

    - by asd
    Hi. I know something about Pascal, Mathematica and Matlab, but I don't have any idea about the C, C++ and C# languages. I want to learn one of the languages that are fast and exact for computing arithmetic functions on large numbers (for example, larger than 10^3000). I asked somebody, and he said he used C++ and that he computed this sequence in less than 10 minutes. I want to get to know C, C++, C# and the Visual flavours of these languages, and to know which is best for my goal. Let f be an arithmetic function and let A = {k1, k2, ..., kn} be integers in increasing order. Now I want to start with k1 and compare f(ki) with f(k1). If f(ki) > f(k1), take ki in place of k1. Now start from ki and compare f(kj) with f(ki), for j > i. If f(kj) > f(ki), take kj in place of ki, and repeat this procedure. At the end we will have a subsequence B = {L1, ..., Lm} of A with this property: f(L(i+1)) > f(L(i)), for any 1 <= i <= m-1. I have written code for this in Mathematica, and it takes some hours to compute f of the ki's or the set B for large numbers. For example, let f be the divisor function of integers. Do you know how to write code for my purpose in Mathematica or Matlab? Mathematica is preferable.
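    For scale, the scan itself is only a few lines in any of these languages; here is a hypothetical JavaScript version, with a naive divisor-counting function standing in for f (numbers near 10^3000 would need big-integer arithmetic and a far smarter f than trial division):

        // Toy stand-in for f: count the divisors of n by trial division.
        function divisorCount(n) {
          let count = 0;
          for (let d = 1; d * d <= n; d++) {
            if (n % d === 0) count += (d * d === n) ? 1 : 2;
          }
          return count;
        }

        // The greedy scan described above: keep the current anchor, and move
        // it to the next element whose f-value strictly exceeds the anchor's.
        function greedyIncreasing(A, f) {
          if (A.length === 0) return [];
          const B = [A[0]];
          let current = f(A[0]);
          for (let i = 1; i < A.length; i++) {
            const v = f(A[i]);
            if (v > current) { // first k with f(k) > f(anchor) becomes the new anchor
              B.push(A[i]);
              current = v;
            }
          }
          return B;
        }

        console.log(greedyIncreasing([2, 3, 4, 5, 6, 7, 12], divisorCount));
        // -> [ 2, 4, 6, 12 ] : their f-values 2, 3, 4, 6 strictly increase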

    Read the article

  • How to get that first development job

    - by cju
    I have been in QA for 10 years, trying to get into development for about 5 of them. I have taken classes in C++, Java and C#. I was able to write some tools and unit tests in C# at my current job and (by all accounts) did a good job of it. However, 8 months ago, my employer tasked me with the responsibility of establishing the new QA group. Now I'm doing manual testing and deployment with no promise of returning to development. I have looked at the job boards, seen that there are a lot of jobs for web developers, and wondered how I could break into that. I've picked up some books on Ruby on Rails that I plan to work through on the Mac at home, but I'm not sure employers would be interested in anything but commercial web development. Do you have any suggestions on how I can use my experience to get a job as a junior developer? And I mean one that entails programming... the postings I've seen for junior developers amount to doing all the grunt work besides coding. They should just call them "Technical Secretaries".

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (Google Log, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:
    - performance
    - ease of use (allowing streaming or formatting or something like that)
    - reliability (doesn't leak or crash!)
    - cross-platform (at least Windows, Mac OS X, Linux/Ubuntu)
    Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited, but I don't have comparison points on its performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding and complex to debug, so it would be good to know which logging libraries, in our specific case, have clear advantages.

    Read the article

  • Need to get a list of all users within a subnet of servers

    - by mikedopp
    I am looking to write a batch or VBS script to gather all users (local to the server, i.e. administrators or local accounts, not AD users) on a collection of servers inside my network. I assume I could do this by subnet. I could even put the server names into a CSV text file for the script to read from and report back to. Lots to ask. I would use net user, however I run into local access only. Ideas? Or are there too many security walls for this to work?
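    One hedged sketch of the idea: Windows Script Host runs JScript as well as batch/VBScript, and the WinNT:// ADSI provider enumerates accounts local to each machine (not AD users). The server names and output file below are placeholders:

        // Run with: cscript //nologo list-local-users.js
        var fso = new ActiveXObject("Scripting.FileSystemObject");
        var out = fso.CreateTextFile("local-users.csv", true);
        var servers = ["SERVER01", "SERVER02", "SERVER03"]; // or read from a CSV

        for (var i = 0; i < servers.length; i++) {
          try {
            // bind to the machine's local account database via ADSI
            var computer = GetObject("WinNT://" + servers[i] + ",computer");
            for (var e = new Enumerator(computer); !e.atEnd(); e.moveNext()) {
              var item = e.item();
              if (item.Class === "User") { // skip groups, services, etc.
                out.WriteLine(servers[i] + "," + item.Name);
              }
            }
          } catch (err) {
            out.WriteLine(servers[i] + ",ACCESS ERROR"); // no rights on that box
          }
        }
        out.Close();

    The account running the script still needs administrative rights on each target server, which may be exactly the security wall you're anticipating.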

    Read the article

  • Why would 70-persistent-net.rules have no effect?

    - by Wes Felter
    I've got a saucy server with a lot of NICs, and they end up with weird names like "rename19". I know interface names can be changed by modifying the /etc/udev/rules.d/70-persistent-net.rules file. The first clue that something is wrong is that that file did not exist, even though it's supposed to be created automatically. So I decided to write my own based on advice from Linux From Scratch:
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.1", NAME="eth1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.2", NAME="eth2"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.3", NAME="eth3"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.0", NAME="mezz0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.1", NAME="mezz1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.0", NAME="slot1a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.1", NAME="slot1b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.0", NAME="slot2a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.1", NAME="slot2b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.0", NAME="slot3a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.1", NAME="slot3b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.0", NAME="slot4a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.1", NAME="slot4b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.0", NAME="slot5a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.1", NAME="slot5b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.0", NAME="slot6a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.1", NAME="slot6b"
    (I'm matching on PCI IDs instead of MAC addresses because I have multiple identical machines that I want to apply this configuration to.) After rebooting, nothing has changed. It's like these rules aren't even being read. There's not much going on in dmesg either:
        $ dmesg | grep udev
        [    3.196629] systemd-udevd[323]: starting version 204
        [    6.719140] systemd-udevd[550]: starting version 204
        [   38.695050] init: udev-fallback-graphics main process (1658) terminated with status 1

    Read the article

  • Send less Server Data with "AFK"

    - by Oliver Schöning
    I am working on a 2D (realtime) multiplayer game with Construct2 and a Socket.IO JavaScript server. Right now the code does not include the array for each player.
        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
            socket.on("message", function(data) {
                x = x+1;
            });
        });
        setInterval(function() {
            io.sockets.emit("message", 'Pos,' + x);
        }, 100);
    I noticed a very annoying problem with my server today. It sends my X coordinate every 100 milliseconds. The problem was that when I went into another browser tab, the browser stopped the game from running. And when I went back, I think the game had to run through all the queued packets, because my offline debugging button still worked immediately while the online button only responded after some seconds. So then I changed my code so that it would only send out an update when it received player input:
        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
            socket.on("message", function(data) {
                x = x+1;
                io.sockets.emit("message", 'Pos,' + x);
            });
        });
    And it updated immediately, even when I had been inactive on the browser tab for a long time, confirming my suspicion that it had to get through all the data. Confirm, please! It would be insane to only send information on client input in a real-time game. But how would I write an AFK function? I would think it is easier to run an AFK boolean loop on the server. Here is what I need help with:
        playerArray[Me]
        if ( "Not Given any Input for X amount of Seconds" ) {
            "Don't send Data"
        } else {
            "Send Data"
        }
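    For what it's worth, one way the AFK boolean idea could look on the server (an untested sketch against the socket.io 0.9-era API; the 30-second threshold is arbitrary): track a lastInput timestamp per socket and skip idle sockets in the broadcast loop.

        var io = require("socket.io").listen(80);
        var x = 10;
        var AFK_MS = 30000; // hypothetical: 30s without input counts as AFK

        io.sockets.on("connection", function (socket) {
            socket.lastInput = Date.now();
            socket.on("message", function (data) {
                socket.lastInput = Date.now(); // any input marks the player active
                x = x + 1;
            });
        });

        setInterval(function () {
            var now = Date.now();
            io.sockets.clients().forEach(function (socket) { // clients() per socket.io 0.9
                if (now - socket.lastInput < AFK_MS) {
                    socket.emit("message", "Pos," + x); // only active players get updates
                }
            });
        }, 100);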

    Read the article

  • Efficient coding in Visual Studio (or another IDE), with touch typing

    - by cheeesus
    Moving the cursor to another position in the code is one of the most frequent actions when coding. I don't write my programs from beginning to end, like a letter. However, moving the cursor requires me to move my right hand to the arrow keys or to the mouse, which feels like an interruption to my writing rhythm, since I'm touch typing. I want my hands to rest on the keyboard. It's difficult to explain, but I think every coder who touch-types knows what I mean. I tried many things, like defining some shortcuts as surrogate arrow keys (Shift+Alt+J, K, L, I), or buying a keyboard with a TrackPoint, trackpad, or trackball on it, but I have not yet found a satisfying solution to the problem. What is the best solution you know of, regardless of which IDE you use? Edit: Thank you for your answers. I am using a lot of shortcut keys, but I think using a Vim plugin in Visual Studio would interfere too much with the shortcuts I am used to. Also, I have a keyboard with a built-in mouse, but I'm still looking for a better solution.

    Read the article

  • Is TrueCrypt robust against data corruption?

    - by Dimitri C.
    I would expect a TrueCrypt volume to be fragile when it suffers from data corruption. This could happen, for example, because a hard disk, CD or DVD starts to deteriorate, or when a USB stick is unplugged while a write is in progress. In the TrueCrypt FAQ it is mentioned that this problem is limited because the data is encrypted in blocks of 16 bytes. However, I'd like to know if this is really so in practice. Has anyone experienced severe data loss due to only small corruptions?

    Read the article

  • Devoxx 2011 Started Today

    - by Yolande
    Devoxx 2011, organized by the Java user group in Belgium, is the biggest Java conference in Europe. The first two University Days set the tone for the weeklong conference with their in-depth technical sessions led by luminaries from the Java community and industry experts. Each day is a great mix of 3-hour sessions and hands-on labs, 30-minute Tools-in-Action sessions giving tips for faster and better application development, and the traditional Birds-of-a-Feather sessions in the evening. Java sessions for today and tomorrow:
    - Next Gen Enterprise Apps - Bert Ertman and Paul Bakker talked about new Java EE 6 APIs that reduce the need for boilerplate code and configuration.
    - JavaFX 2.0 - A Java developer's guide - Stephen Chin and Peter Pilgrim will give an overview of the new version and how Java developers can take advantage of it.
    - Java Rich Clients with JavaFX 2.0 - Richard Bair and Jasper Potts will get into the JavaFX 2.0 APIs.
    - Building an end-to-end application using Java EE 6 and NetBeans - Arun Gupta will showcase how to write Java EE 6 applications more effectively.
    - The OpenJDK Community BOF with Dalibor Topic
    Starting Tuesday, come by the Oracle booth to chat about technology, enter our raffle and have a beer every day at 18:45. The sessions will be available on the Parleys website after the conference. In the meantime, you can learn a lot about these Java technologies on our website:
    - JavaFX 2.0 tutorials and documentation
    - OpenJDK
    - News from the GlassFish community
    - Java EE 6 resources
    - JavaOne sessions

    Read the article

  • Using SPServices & jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me.  I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right?  The rub is that the "assigned to" field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post… So… what's a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn't think it meant "Knights In Satan's Service". You really gotta be an old fart to get that reference).  Here's what we're going to do:
    1. Get the currently logged in user's name as it is stored in a person field
    2. Find all the SharePoint groups the current user belongs to
    3. Retrieve a set of assigned tasks from the task list and then find those that are assigned to the current user or a group the current user belongs to
    Nothing too hairy… So let's get started.
    Some Caveats before I continue
    There are some obvious performance implications with this solution, as I make a total of four SPServices calls and there's a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data, or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all… Alright, let's really get started.
    Get the currently logged in user's name as it is stored in a person field
    First thing we need to do is understand how a person field looks when you look at the XML returned from a SharePoint Web Service call. It turns out it's stored like any other multi-select item in SharePoint, which is <id>;#<value>, and when you assign a person to that field the <value> equals the person's name, "Mark Rackley" in my case. This is for Windows Authentication; I would expect this to be different in FBA, but I'm not using FBA. If you want to know what it looks like with FBA, you can use the code in this blog and strategically place an alert to see the value.  Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call:
        var userName = $().SPServices.SPGetCurrentUser({
            fieldName: "Title",
            debug: false
        });
    As you can see, the "Title" field has the information we need. I suspect (although again, I haven't tried) that the Title field also contains the user's name as we need it if I were using FBA. Okay… the last thing we need to do is store our user's name in an array for processing later:
        myGroups = new Array();
        myGroups.push(userName);
    Find all the SharePoint groups the current user belongs to
    Now for the groups. How are groups returned in that XML stream? Same as the person: <ID>;#<Group Name>, and if it's a multi select it's all returned in one big long string "<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>".  So, how do we find all the groups the current user belongs to?
This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this method and store all our groups. $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });         }     }); So, all we did in the above code was execute the “GetGroupCollectionFromUser” operation and look for the each “Group” node (row) and store the name for each group in our array that we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name as it will appear in the person field XML and  all the groups the current user belongs to. The Rest Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person we are going to store it for later processing. That code looks something like this: // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                        //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                             //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                              
   var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 }); Some notes on why I did certain things and additional caveats. You will notice in my code that I’m doing an AssignedToString.indexOf(GroupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint Group names that are named in such a way that the “IndexOf” returns a false positive.  For example if you have a Group called “My Users” and a group called “My Users – SuperUsers” then if a user belonged to “My Users” it would return a false positive on executing “My Users – SuperUsers”.IndexOf(“My Users”). Make sense? Just be aware of this when naming groups, we don’t have this problem. This is where also some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user, I mean what if a user belongs to 20 groups? That’s a LOT of looping.  See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a Div? Well.. I want to pass my data to a jQuery library to format it all nice and pretty and an Array is a great way to do that. When all is said and done and we put all the code together it looks like:   $(document).ready(function() {         var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     });         myGroups = new Array();     myGroups.push(userName );       $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });                      // get list of assigned tasks that aren't closed... 
*modify this CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                         //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                            //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                                 var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 });       }    });  }); Final Thoughts So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the “myGroups” array from this blog post to hide all those rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view) so it may perform better.  As always, thanks for stopping by… hope you have a Merry Christmas…

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags this is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure. UPDATE [DataSource]    SET       [Flags] = [Flags] & 0x7FFFFFFD, -- broken link       [Link] = NULL FROM    [Catalog] AS C    INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link] WHERE    (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*') Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. 
    Sorry, rant over. I should log a Connect item – I'll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source having been broken in the past.
    SELECT c.Path, ds.Name
    FROM dbo.[DataSource] AS ds
    JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
    WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
    AND ds.[Link] IS NULL
    AND ds.[ConnectionString] IS NULL;
    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).
    UPDATE ds
    SET [Flags] = [Flags] | 2,
        [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
    OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
    FROM dbo.[DataSource] AS ds
    JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
    WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
    AND ds.[Link] IS NULL
    AND ds.[ConnectionString] IS NULL;
    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley

    Read the article
