Search Results

Search found 932 results on 38 pages for 'patrick rood'.


  • Paging problem in Data Form Webpart SP2010

    - by Patrick Olurotimi Ige
    I was working on a web part in SharePoint Designer 2010 and decided to use the default custom paging. But I noticed the "previous page" link isn't working: it basically just takes me back to the start page of the list and not the previous page. After a good look I noticed Microsoft is using "history.back()", which is supposed to work but doesn't behave well with paged data. Anyway, before I started further investigation I found Hani Amr's solution at the right time, and that did the trick. Hope that helps.

    Read the article

  • Is it better to define all routes in the Global.asax than to define separately in the areas?

    - by Matthew Patrick Cashatt
    I am working on an MVC 4 project that will serve as an API layer of a larger application. The developers that came before me set up separate Areas to separate different API requests (e.g. Search, Customers, Products, and so forth). I am noticing that each Area has a separate area registration class that defines routes for that area. However, the routes defined are not area-specific (e.g. {controller}/{action}/{id} might be defined redundantly in a couple of areas). My instinct would be to move all of these route definitions to a common place like the Global.asax to avoid redundancy and collisions, but I am not sure if I am correct about that.
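
    For reference, here is a rough sketch (class and area names are illustrative, not taken from the project in question) of the split the question is weighing: one generic fallback route in the global registration, and area-prefixed routes in each AreaRegistration so they no longer collide.

        using System.Web.Mvc;
        using System.Web.Routing;

        // Global registration (Global.asax / App_Start): only the generic
        // fallback route lives here.
        public static class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }

        // Hypothetical area registration: the route is prefixed with the area
        // name, so it no longer duplicates the global pattern.
        public class SearchAreaRegistration : AreaRegistration
        {
            public override string AreaName
            {
                get { return "Search"; }
            }

            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "Search_default",
                    "Search/{controller}/{action}/{id}",
                    new { action = "Index", id = UrlParameter.Optional });
            }
        }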

    Read the article

  • How do I source a shell script for Node Version Manager?

    - by Matthew Patrick Cashatt
    Hi and thanks for looking! I am new to Linux/Ubuntu, but I have set up an Ubuntu box on which to run Node.js. I have had moderate success, but now I need to be able to easily upgrade my version of Node. Many folks recommend using Node Version Manager (nvm). I followed the directions, but when I try to do something like "nvm ls" I get a message stating that no command "nvm" was found. I have gone back to check the steps I followed to install nvm, but there is one part that is tricky for me and that I think is the culprit: sourcing the file for bash. From the instructions: "To activate nvm, you need to source it from your bash shell: . ~/nvm/nvm.sh. I always add this line to my ~/.bashrc or ~/.profile file to have it automatically sourced upon login. Often I also put in a line to use a specific version of node." So which file should I add this to? I am guessing ~/.profile since it's Ubuntu? Also, where in the file do I add this line? After I have added it, do I need to reboot or anything? Any help would be deeply appreciated, especially if you can show me an example profile file with ". ~/nvm/nvm.sh" integrated so that I can see usage. Thanks, Matt
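
    A minimal sketch of what that could look like, assuming nvm really was installed to ~/nvm as in the instructions quoted above:

        # Added at the end of ~/.profile (or ~/.bashrc)
        if [ -s "$HOME/nvm/nvm.sh" ]; then
            . "$HOME/nvm/nvm.sh"      # defines the `nvm` shell function
            # nvm use 0.8             # optional: pick a default Node version (version number is hypothetical)
        fi

    No reboot is needed: either open a new terminal or reload the file with ". ~/.profile". On Ubuntu, ~/.profile is read by login shells and ~/.bashrc by interactive terminals, so adding the snippet to both is harmless.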

    Read the article

  • Sound works for only one user at a time

    - by Patrick
    I've noticed that sound becomes unavailable to me when someone else is logged into my machine and is playing music (or has Facebook open) in the other account. I've had to ask them to unlock their account and turn it off so I can get sound in my own stuff. Even in sound preferences, the hardware itself disappears and output becomes "dummy sound". Is there a way to prevent this from happening? What would be really good is if I could turn down the volume of (or entirely mute) the sounds on all other accounts on a per-user basis from my sound preferences, without affecting whatever settings they have. Essentially: whenever user A is logged in, all sounds from user B's account are muted and anything from user C's account is at 50%, while I can still have my own at full volume.

    Read the article

  • using GNU GPL v2 software as pointers to solution to problem

    - by Patrick
    I am coding a PHP serial access class and have been taking pointers from the PHP-serial class on Google Code (here). That class is based on PHP 4, and I'm creating a PHP 5 class that allows more functionality and is specific to some business demands I have. No code has been copied and I have done all the coding myself. Does the class I'm writing fall under the GPL of the Google Code project, or am I free to select a license that I feel is appropriate? I'm not sure what standard applies to licensing when you are only looking to another work for pointers.

    Read the article

  • Permissions issues with mounting remote server into a specific folder

    - by Patrick
    I'm doing the following to mount a remote server to a specific path on my server:

        sshfs [email protected]:/backup/folder/ /home/myuser/server-backups/

    However, when I mount the server the folder permissions change (they become 700), and when I test my rsnapshot.conf file I get the following error:

        snapshot_root /home/myuser/server-backups/ - snapshot_root exists but is not readable

    What am I doing wrong? Should I mount the remote server with another user?
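
    For what it's worth, a sketch of the kind of sshfs options that are usually involved when a mount ends up unreadable to the local user; treat the exact flags as assumptions to verify (and note that allow_other requires user_allow_other to be enabled in /etc/fuse.conf):

        # Hypothetical variant of the mount: map remote ownership onto the
        # local user and let other local users read the mount point.
        sshfs user@server:/backup/folder/ /home/myuser/server-backups/ \
            -o idmap=user \
            -o uid=$(id -u myuser),gid=$(id -g myuser) \
            -o allow_other,default_permissions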

    Read the article

  • "Backup Intervals" in rsnapshot.conf?

    - by Patrick
    A simple question about rsnapshot. In order to perform daily backups I'm going to add lines to cron on my Ubuntu box. Then why do I also have these lines in rsnapshot.conf?

        #########################################
        # BACKUP INTERVALS                      #
        # Must be unique and in ascending order #
        # i.e. hourly, daily, weekly, etc.      #
        #########################################
        interval        hourly  6
        interval        daily   7
        interval        weekly  4
        #interval       monthly 3

    If I use cron, should I disable them? Thanks.

    P.S. I've just realized that in the crontab I still have "hourly" and "daily". Should I then uncomment only the ones I use in the crontab? And what's the point of specifying hourly if it is already specified in cron? I'm a bit confused.

        # crontab -e
        0 */4 * * * /usr/local/bin/rsnapshot hourly
        30 23 * * * /usr/local/bin/rsnapshot daily
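
    As an illustration of how the two files usually divide the work (the numbers are just the ones from the question, not a recommendation): rsnapshot.conf says how many snapshots of each level to keep, while cron says when each level is actually created.

        # rsnapshot.conf -- retention (fields must be separated by tabs)
        interval        hourly  6    # keep the 6 most recent hourly snapshots
        interval        daily   7    # keep 7 daily snapshots

        # crontab -- schedule
        0 */4 * * * /usr/local/bin/rsnapshot hourly    # every 4 hours
        30 23 * * * /usr/local/bin/rsnapshot daily     # once a day at 23:30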

    Read the article

  • 'tools.jar' is not in IDEA classpath

    - by Patrick
    I am a new user of Linux; it was recommended to me by my friend. He told me to install software called IntelliJ IDEA. Well, I have been following the tutorial, but now when I try to run "idea.sh" an error message pops up:

        'tools.jar' is not in IDEA classpath. Please ensure JAVA_HOME points to JDK rather than JRE.

    Please remember that I'm new to Ubuntu and I'm planning for a nice long stay once I get myself into it :) Also, I do not know if I am running a correct Java 6 JDK. When I do java -version, this is what I get:

        java version "1.6.0_23"
        OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre10-0ubuntu5)
        OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

    Thank you for reading this and I hope I will get a nice response.
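
    A sketch of the usual remedy, assuming the OpenJDK 6 packages (the exact JVM directory name varies between releases, so check /usr/lib/jvm/ rather than copying the path below verbatim):

        # Install the full JDK; the JRE alone does not ship tools.jar
        sudo apt-get install openjdk-6-jdk

        # Point JAVA_HOME at the JDK directory before launching the IDE
        export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
        ./idea.sh

    Putting the export line in ~/.profile makes it persistent across logins.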

    Read the article

  • Nvidia Driver versions?

    - by Patrick Krenz
    I've looked all over and can't find any reason as to why or how Nvidia names their drivers. For example, they have 330.xxx/340.xxx series that are current, but also a 300.xxx series, and I've found that they aren't always released in order by number. Here's an example from their site with version and release date:

        331.38 - January 13
        334.16 - February 7
        331.49 - February 18

    I'm really confused about which driver to actually go with; a few different series versions seem to work adequately and I just want to have an understanding of it and of what the best option to work from would be. I really appreciate any information.

    Read the article

  • Why is the use of abstractions (such as LINQ) so taboo?

    - by Matthew Patrick Cashatt
    I am an independent contractor and, as such, I interview 3-4 times a year for new gigs. I am in the midst of that cycle now and got turned down for an opportunity even though I felt like the interview went well. The same thing has happened to me a couple of times this year. Now, I am not a perfect guy and I don't expect to be a good fit for every organization. That said, my batting average is lower than usual, so I politely asked my last interviewer for some constructive feedback, and he delivered!

    The main thing, according to the interviewer, was that I seemed to lean too much towards the use of abstractions (such as LINQ) rather than towards lower-level, organically grown algorithms. On the surface, this makes sense; in fact, it made the other rejections make sense too, because I blabbed about LINQ in those interviews as well and it didn't seem that the interviewers knew much about LINQ (even though they were .NET guys).

    So now I am left with this question: if we are supposed to be "standing on the shoulders of giants" and using abstractions that are available to us (like LINQ), then why do some folks consider it so taboo? Doesn't it make sense to pull code "off the shelf" if it accomplishes the same goals without extra cost? It would seem to me that LINQ, even if it is an abstraction, is simply an abstraction of all the same algorithms one would write to accomplish exactly the same end. Only a performance test could tell you if your custom approach was better, but if something like LINQ met the requirements, why bother writing your own classes in the first place?

    I don't mean to focus on LINQ here. I am sure that the Java world has something comparable; I just would like to know why some folks get so uncomfortable with the idea of using an abstraction that they themselves did not write.

    UPDATE: As Euphoric pointed out, there isn't anything comparable to LINQ in the Java world. So, if you are developing on the .NET stack, why not always try and make use of it? Is it possible that people just don't fully understand what it does?

    Read the article

  • Can I make a bootable USB flash drive for Mac from Windows

    - by Patrick
    Problem: MacBook hard drive crashed and is ruined. I need to work on a music assignment on a program only available for Mac OS X and Ubuntu, and will not be able to get a new hard drive for the Mac before the assignment is due. I only have non-administrator access to Windows XP and 7 computers. Can I make a USB drive with Ubuntu on it so I can use my MacBook with this? Can I create this from a Windows computer? Please give detailed steps, if possible, for I am a noob when it comes to computers, and especially Linux.

    Read the article

  • How to efficiently protect part of an application with a license

    - by Patrick
    I am working on an application that has many functional parts. When a customer buys the application, he buys the standard functionality, but he can also buy some additional elements of the application for an additional price. All of the elements are part of the same application executable. A license key is used to indicate which of the elements should be accessible in the application. Some of the elements can be easily disabled if the user didn't pay for them; these are typically the modules that you can access via the application's menu. However, other elements cause more problems:

    What if a part of the data model is related to an optional part? Do I build up those data structures anyway, so the rest of my application can just assume they're always there? Or do I not build them, and add checks throughout the rest of my application?

    What if some optional part is still useful for performing some internal tasks, but I don't want to expose it to the user externally?

    What if the person responsible for marketing wants to make a standard part optional? Throughout my application I assume that that part is present, but if it becomes optional, I have to add checks on it everywhere in the application.

    I have some ideas on how to solve some of these problems (e.g. interfaces with dual implementations: one working implementation, and one that is activated if the optional part is not licensed). Do you know of any patterns that can be used to solve this kind of problem? Or do you have any suggestions on how to handle this licensing problem? Thanks.
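
    The dual-implementation idea mentioned above could look roughly like the following C# sketch (the module name and factory are made up for illustration; this is a null-object style stand-in, not a prescription):

        // The optional module sits behind an interface; the license decides
        // which implementation the rest of the application sees.
        public interface IReportingModule
        {
            bool IsAvailable { get; }
            void GenerateReport(string customerId);
        }

        public sealed class ReportingModule : IReportingModule
        {
            public bool IsAvailable { get { return true; } }
            public void GenerateReport(string customerId)
            {
                // real work lives here
            }
        }

        // Used when the license does not include reporting: callers can still
        // invoke it safely, it just does nothing (or could raise an upsell prompt).
        public sealed class DisabledReportingModule : IReportingModule
        {
            public bool IsAvailable { get { return false; } }
            public void GenerateReport(string customerId) { /* no-op */ }
        }

        public static class ModuleFactory
        {
            public static IReportingModule CreateReporting(bool licensed)
            {
                return licensed
                    ? (IReportingModule)new ReportingModule()
                    : new DisabledReportingModule();
            }
        }

    The rest of the application codes against IReportingModule and never checks the license directly, which keeps the data model and menus consistent whether or not the optional part was bought.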

    Read the article

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am currently enduring a round of grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid, but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would:

    1. Use a stream object to put the text file in memory as a string.
    2. Split the string into an array on spaces while ignoring punctuation.
    3. Use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count.

    I got this answer wrong for two reasons:

    1. Streaming an entire text file into memory could be disastrous. What if it was an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table.
    2. LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't already exist and then incremented its count.

    The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables, but that, under the veil, it is still the same implementation.

    Question: aside from a few additional processing cycles to call any abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data iteration task than a lower-level approach (such as building a hash table) would?
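
    For concreteness, here is a rough sketch (not from the interview; the file name is hypothetical) of the two approaches being compared: a streaming loop over a Dictionary, and the LINQ pipeline. Under the hood GroupBy also builds a hash-based lookup, so the gap is mostly extra allocations and delegate calls rather than a different algorithm.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class WordCount
        {
            static void Main()
            {
                var separators = new[] { ' ', ',', '.', ';', ':', '!', '?' };

                // 1) Streaming + hash table: one pass, one line in memory at a time.
                var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
                foreach (var line in File.ReadLines("book.txt"))
                {
                    foreach (var word in line.Split(separators, StringSplitOptions.RemoveEmptyEntries))
                    {
                        int n;
                        counts.TryGetValue(word, out n);
                        counts[word] = n + 1;
                    }
                }

                // 2) LINQ: the same idea expressed declaratively. File.ReadLines still
                //    streams, and GroupBy builds its own hash-based grouping internally.
                var ranked = File.ReadLines("book.txt")
                    .SelectMany(l => l.Split(separators, StringSplitOptions.RemoveEmptyEntries))
                    .GroupBy(w => w, StringComparer.OrdinalIgnoreCase)
                    .Select(g => new { Word = g.Key, Count = g.Count() })
                    .OrderByDescending(x => x.Count);

                foreach (var entry in ranked.Take(10))
                    Console.WriteLine("{0}: {1}", entry.Word, entry.Count);
            }
        }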

    Read the article

  • 'unknown filesystem' grub rescue prompt; trying to wipe drive and boot 10.10 live

    - by Patrick
    I'm currently running Win7 and want to wipe the drive and install 10.10. I have 10.10 loaded on a USB thumb drive, and the machine sees the device in the BIOS, but it only reaches a screen saying:

        Unknown Filesystem
        grub rescue>

    I've read several results from Google and a couple here where people are trying to dual boot and, I assume, save the data on the drive, but I don't care about doing that and would prefer to just wipe the drive and start fresh. What steps can I take to get the drive to a point where I can load 10.10 live and get it installed?

    Read the article

  • Why is this happening with the wine menu?

    - by Patrick
    Some of the items in the Wine menu are given a prefix that is their entire path. The items that don't have the long prefix seem to work fine, but those that do, don't respond to the Properties button or double-click in the menu editor. They take a lot of space, and look ugly, but I can't rename them. I've tried editing their associated files, there doesn't appear to be anything different about them to the ones that are working fine. They weren't always like that - it just happened after an upgrade one day and it's been like that ever since.

    Read the article

  • What are the benefits vs costs of comment annotation in PHP?

    - by Patrick
    I have just started working with Symfony2 and have run across comment annotations. Although comment annotation is not an inherent part of PHP, Symfony2 adds support for this feature. My understanding of commenting is that it should make the code more intelligible to the human; the computer shouldn't care what is in comments. What benefits come from doing this type of annotation versus just putting a command in the normal PHP code? For example:

        /**
         * @Route("/{id}")
         * @Method("GET")
         * @ParamConverter("post", class="SensioBlogBundle:Post")
         * @Template("SensioBlogBundle:Annot:post.html.twig", vars={"post"})
         * @Cache(smaxage="15")
         */
        public function showAction(Post $post)
        {
        }
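
    For comparison, a rough sketch (assuming standard Symfony2 controller conventions; the repository lookup and exact calls are illustrative) of roughly the same behaviour written out explicitly instead of with annotations. The route itself would then live in a routing configuration file rather than in @Route/@Method:

        public function showAction($id)
        {
            // what @ParamConverter was doing
            $post = $this->getDoctrine()
                ->getRepository('SensioBlogBundle:Post')
                ->find($id);

            if (!$post) {
                throw $this->createNotFoundException('No post found for id '.$id);
            }

            // what @Template was doing
            $response = $this->render(
                'SensioBlogBundle:Annot:post.html.twig',
                array('post' => $post)
            );

            // what @Cache(smaxage="15") was doing
            $response->setSharedMaxAge(15);

            return $response;
        }

    The annotation version is shorter and keeps the metadata next to the action, at the cost of behaviour being configured in comments rather than in code the runtime obviously executes.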

    Read the article

  • filtering dates in a data view webpart when using webservices datasource

    - by Patrick Olurotimi Ige
    I was working on a Data View Web Part recently and I had to filter the data based on dates. Since the data source was web services I couldn't use the Offset trick which I blogged about earlier. When using web services to pull data into SharePoint Designer you have to use XPath. So, for example, this is the SOAP result that populates the rows:

        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row"/>

    To filter on the date nodes you need to add a predicate []. So you can do something like this (the filter is the predicate in square brackets):

        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row[ddwrt:FormatDateTime(string(@ows_Created),1033,'yyyyMMdd') &gt;= ddwrt:FormatDateTime(string(substring-after($fd,'#')),1033,'yyyyMMdd')]"/>

    For the filtering to work, the date has to be formatted as yyyyMMdd, as above.

    One more thing you may have noticed is the $fd variable. I created it from a calculated column in the list, something like [Created]-2. So basically the XPath only returns rows whose Created date is greater than or equal to $fd, i.e. the created date minus two days. Also note that when using web services in SharePoint Designer, if you try to use the default filtering you won't see "greater than" or "less than" in the comparison option list :(

    Hope this helps.

    Read the article

  • How to automate mysql backups?

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"

    1) First of all, is this a good approach?
    2) It is not clear how to specify the repository and the working copy with svn.
    3) How can I run svn only when the mysqldump is done and not before, avoiding conflicts?

    Any other tip? Thanks.
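
    A sketch of one common way to wire this up (paths, database name, and schedule are placeholders, not recommendations): put the commands in a small script so the svn commit only runs if the dump succeeded, then call the script from cron.

        #!/bin/bash
        # db-backup.sh -- hypothetical helper; paths and names are placeholders.
        set -e -o pipefail                  # abort if the dump (or gzip) fails,
                                            # so svn commit only runs on success
        cd /home/backup/working-copy
        mysqldump -u root -pPASSWORD database_name \
            | gzip > "database_$(date +%m-%d-%Y).sql.gz"
        svn add --force .                   # register the new dump file
        svn commit -m "Database dump $(date +%F)"

        # Example crontab line (% is special in crontab, which is one reason to
        # keep the date call inside the script rather than in the crontab entry):
        # 30 2 * * * /home/backup/db-backup.sh >> /var/log/db-backup.log 2>&1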

    Read the article

  • Installing APC on lighttpd + php 5.2

    - by Patrick
    I've found this tutorial to install APC on servers running lighttpd + PHP 5.2 on Ubuntu 10. However, when I run "sudo pecl install apc" the package is just downloaded and not installed (i.e. I'm not asked the next question, and the apc.ini file is not created at all). If I run only "pecl install apc" I get a warning (no permission to write some files). (I need instructions for both 9.04 and 10.04.) Thanks.
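
    For reference, the usual shape of a manual APC install on that era of Ubuntu looks roughly like this; the package names and paths are the common defaults, but treat them as assumptions to check against your setup:

        # Build prerequisites for PECL extensions
        sudo apt-get install php5-dev php-pear make

        # Build and install the extension (run with sudo so it can write files)
        sudo pecl install apc

        # pecl does not create the ini file; enable the extension by hand
        echo "extension=apc.so" | sudo tee /etc/php5/conf.d/apc.ini

        # lighttpd runs PHP over FastCGI, so restart it to reload the PHP processes
        sudo /etc/init.d/lighttpd restart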

    Read the article

  • The error indicates that IIS is in 32 bit mode, while this application is a 64 bit application and thus not compatible.

    - by Patrick Olurotimi Ige
    I was trying to install a new WSS v3 SharePoint site on a 64-bit Windows 2003 server today, but the installation gave an error saying I would need to allow ASP.NET 2.0 in the Web Service Extensions in IIS. Looking at IIS, ASP.NET 2.0 (32-bit) was allowed, but there was no entry for the 64-bit version. I tried registering ASP.NET with aspnet_regiis, but had no luck doing so.

    For the 32-bit version:

        %SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i

    For the 64-bit version:

        %SYSTEMROOT%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i

    I get the error "The error indicates that IIS is in 32 bit mode, while this application is a 64 bit application and thus not compatible." (The difference between the two commands is the \Framework64 folder.)

    So my next guess was to find a way to disable the 32-bit mode and then allow the 64-bit version. Luckily enough I found this link: MS to the rescue. So I just ran:

        cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 0

    then registered the 64-bit version:

        %SYSTEMROOT%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i

    and that was it.

    Read the article

  • window.scrollBy only works in Firefox !? [closed]

    - by Patrick
    In my website I have this JavaScript code, which adds a vertical offset when a specific section of the page is specified in the URL (with a # fragment):

        if (!!window.location.hash) window.scrollBy(0,-60);

    However, this only works in Firefox... I'm pretty sure window.location.hash works in all browsers, that is, the hash symbol is correctly detected in the URL. However, the -60 offset only works in Firefox. This is the URL, could you give me some insight? http://patrickdiviacco.co.cc/#432 Thanks
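
    One likely culprit (an assumption, not a verified diagnosis): if this code runs before the browser performs its own jump to the anchor, the scrollBy result gets overwritten. A common workaround is to apply the offset after the page has loaded and again whenever the hash changes; a sketch:

        // Apply the offset after the browser has done its own anchor scroll.
        function offsetAnchor() {
          if (window.location.hash) {
            window.scrollBy(0, -60);   // compensate for the fixed header height (assumed to be 60px)
          }
        }

        window.addEventListener("load", function () {
          // a minimal delay so this runs after the native jump
          setTimeout(offsetAnchor, 1);
        });
        window.addEventListener("hashchange", offsetAnchor);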

    Read the article

  • Merging the Executive Committees

    - by Patrick Curran
    As I explained in this blog last year, we use the Process to change the Process. The first of three planned JSRs to modify the way the JCP operates (JSR 348: Towards a new version of the Java Community Process) completed in October 2011. That JSR focused on changes to make our process more transparent and to enable broader participation. The second JSR was inspired by our conviction that Java is One Platform and by our expectation that Java ME and Java SE will become more aligned over time. In anticipation of this change, JSR 355: JCP Executive Committee Merge will merge the two Executive Committees into one.

    The JSR is going very well. We have reached consensus within the Executive Committees, which serve as the Expert Group for process-change JSRs. How we intend to make the transition to a single EC is explained in the revised versions of the Process and EC Standing Rules documents that are currently posted for Early Draft Review. Our intention is to reduce the total number of EC seats but to keep the same ratio (2:1) of ratified and elected seats.

    Briefly, the plan will be implemented in two stages. The October 2012 elections will be held as usual, but candidates will be informed that they will serve only a one-year term if elected. The two ECs will be merged immediately after this election; at the same time, Oracle's second permanent seat and one of IBM's two ratified seats will be eliminated. The initial merged EC will therefore have 30 members. In the October 2013 elections we will eliminate three more ratified seats and two elected seats, thereby reducing the size of the combined EC to 25 members (16 ratified seats, 8 elected seats, plus Oracle's permanent seat). All remaining seats, including those of members who were elected in 2012, will be up for re-election in 2013; that election should be particularly interesting. Starting in 2013 we will change from a three-year to a two-year election cycle (half of all EC members will be up for re-election each year).

    We believe that these changes will streamline our operations and position us for a future in which the distinctions between desktop and mobile devices become increasingly blurred. Please take this opportunity to review and comment on our proposed changes - we appreciate your input. Thank you, and onward to JCP.next.3!

    Read the article

  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP's processes and to meet our members' expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization), to a follow-on JSR.

    The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms.

    Continuing the momentum to move Java and the JCP forward, we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more.

    The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow.

    The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed, we started discussions on the various topics several months ago (see the EC's meeting minutes for details) and our EC members - including the new members who joined within the last year or two - are actively engaged.

    Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348) we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

    Read the article

  • how to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class for data manipulation (i.e. moving files, chmodding files, etc.) and in moveFile() I have multiple levels of validation to pinpoint when a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail without tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've checked for everything that can go wrong before copying.

    Code snippet: (Bad code on the fifth line...)

        // if the change permissions flag is set, change the file permissions
        if($chmod !== null) {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matches my testing file, force the failure. I know it is very bad to put testing code into the actual code that will run on the production server, but I'm not sure how else to do it.

    Note: I am on PHP 5.2, symfony, using lime_test().

    EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ...) is returned.
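
    One hedged way around this (a sketch rather than a lime-specific recipe; the class and method names are made up for illustration): wrap the PHP built-ins that moveFile() depends on in small protected methods, then subclass the class in the test suite and force just the call you want to fail. The earlier validations still run against the real filesystem, so they keep passing.

        // Production class: the only change is that chmod() is reached
        // through a thin, overridable wrapper.
        class FileMover
        {
            protected function doChmod($path, $mode)
            {
                return chmod($path, $mode);   // real call in production
            }

            public function moveFile($src, $dst, $chmod = null)
            {
                // ... earlier validations (readable source, writable destination) ...
                if ($chmod !== null && $this->doChmod($dst, $chmod) === false) {
                    return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
                }
                return array('success' => true);
            }
        }

        // Test-only subclass: only the chmod step is forced to fail.
        class FileMoverWithFailingChmod extends FileMover
        {
            protected function doChmod($path, $mode)
            {
                return false;
            }
        }

        // In the lime test (PHP 5.2 compatible):
        // $t = new lime_test();
        // $mover = new FileMoverWithFailingChmod();
        // $result = $mover->moveFile($src, $dst, 0644);
        // $t->is($result['success'], false, 'a chmod failure is reported');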

    Read the article
