Search Results

Search found 16489 results on 660 pages for 'personal folder'.


  • vb.net, How can I limit a textchanged event for a textbox to keyboard input only?

    - by Luay
    Hi everyone, Please allow me to explain what I have and what I am trying to achieve. I have a textbox (called txtb1) and a button under it (called btn_browse) on a winform in a vb.net project. When the user clicks the button a folder browser dialog appears. The user selects his desired folder and when he/she clicks 'ok' the dialog closes and the path of the selected folder appears in the textbox. I also want to store that value in a variable to be used somewhere else (the value will be copied to an xml file when the user clicks 'apply' on the form, but this has no effect on, nor is related to, my problem). To achieve that I have the following code:

        Public myVar As String

        Private Sub btn_browse_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btn_browse.Click
            Dim f As New FolderBrowserDialog
            If f.ShowDialog() = DialogResult.OK Then
                txtb1.Text = f.SelectedPath
            End If
            myVar = txtb1.Text
            f.Dispose()
        End Sub

    This part works with no problems. Now, what if the user either (1) decides to enter the path manually rather than use the browse button, or (2) after using the browse button and selecting the folder decides to manually change the location? In trying to solve this I added a TextChanged event handler to the textbox as follows:

        Private Sub txtb1_TextChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles txtb1.TextChanged
            myVar = txtb1.Text
        End Sub

    However, this is not working. Apparently, and I don't know if this is relevant, when the user selects the desired folder using the browse button the TextChanged event is also triggered, and when I click on the textbox (to give it focus) and press any keyboard key the application simply stops responding. So my questions are: am I going about this the right way? If my logic is flawed, could someone point me to how such a thing is usually achieved? Is it possible to limit the triggering events to keyboard input only as a way around this? I tried the KeyDown and KeyPress events but I still get the freeze. I would be grateful for your help. Thanks
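
    A minimal sketch of one common pattern, assuming the control names from the question (txtb1, btn_browse): let TextChanged be the single place the variable is updated, which makes the extra assignment in the click handler unnecessary. TextChanged firing when the dialog sets the text is expected and harmless with this arrangement; this sketch does not reproduce the freeze described, so that likely originates outside the snippet shown.

        Public myVar As String

        Private Sub btn_browse_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btn_browse.Click
            Using f As New FolderBrowserDialog()
                If f.ShowDialog() = DialogResult.OK Then
                    ' Setting Text here raises TextChanged once; the handler below keeps myVar in sync.
                    txtb1.Text = f.SelectedPath
                End If
            End Using
        End Sub

        Private Sub txtb1_TextChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles txtb1.TextChanged
            ' Fires for both keyboard edits and programmatic changes, so myVar always
            ' mirrors whatever path is currently in the textbox.
            myVar = txtb1.Text
        End Sub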

    Read the article

  • Is there a free web file manager like Plesk or cPanel in ASP.Net

    - by Ron Klein
    I'm looking for a free, open-source web application written in C#/VB.Net on top of ASP.Net which functions like Plesk or cPanel when it comes to (remote) file management. Something that simulates a regular FTP client but actually serves web pages over HTTP, with the following functions: create folder, rename file/folder, delete file/folder, change timestamp ("touch"), move, archive, etc. I saw a few commercial tools, but nothing when it comes to OSS. Any ideas or links?

    Read the article

  • Efficient way to create a large number of SharePoint folders

    - by BeraCim
    Hi all: I'm currently creating a large number of SharePoint folders within a list (~800 folders), each containing a different number of items. The way it is currently done is to programmatically read the content types, items, event listeners and the like from the same folder in another web, then create the same folder in the current web. That ran reasonably fast in a dev environment; however, in an environment with WFEs and farms it slowed down a lot. I have checked that there are no leaks in the code and that it follows SharePoint coding best practices. At the moment I'm looking at it at the code level. From your experience, are there any efficient ways of creating a large number of SharePoint folders, lists and items? EDIT: I'm currently using the SharePoint API but will look at moving to the web services in the future; I'm interested in both options though. Code-wise, it's just the general reading of a folder, its content types, and its items and their details, then creating the same folder in the same list with the same content types and copying over the items using a batch update. I want to know whether there are more efficient ways of doing the above. Thanks.
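
    As a first thing to check, a minimal server object model sketch (the site URL and list title are illustrative assumptions): open the SPSite/SPWeb/SPList once and reuse them for the whole batch rather than re-resolving them per folder, which is a common cause of slowdowns once web front ends and farm round trips are involved.

        // requires a reference to Microsoft.SharePoint
        using (SPSite site = new SPSite("http://server/sites/target"))   // assumed URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Documents"];                        // assumed list title
            web.AllowUnsafeUpdates = true;
            try
            {
                for (int i = 0; i < 800; i++)
                {
                    // Add each folder directly under the list's root folder.
                    SPListItem folder = list.Items.Add(
                        list.RootFolder.ServerRelativeUrl,
                        SPFileSystemObjectType.Folder,
                        "Folder" + i);
                    folder.Update();
                }
            }
            finally
            {
                web.AllowUnsafeUpdates = false;
            }
        }

    If the item copy itself is the slow part, SPWeb.ProcessBatchData can also reduce round trips for bulk item creation, at the cost of building the CAML batch by hand.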

    Read the article

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:

        <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
        <ItemGroup>
          <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files -->
          <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />    <!-- Folder 2: 2 files -->
          <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />       <!-- Folder 3: 1 file -->
        </ItemGroup>
        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
          </Delete>
          <Message Text="DELETED FILES: $(deleted)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>

    The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then adding Folders 2 and 3, it starts removing all the files as I want. Then, seemingly at random times, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where they should be) but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
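
    One plausible explanation, offered as a guess since the behaviour can't be reproduced here: top-level item groups with wildcards are expanded when the project file is evaluated, so files generated during the build may or may not exist at that point. Moving the item group inside the target defers the wildcard expansion until the target actually runs, which MSBuild 3.5 (the version VS2008 uses) supports. The output is also captured as an item rather than a property, since DeletedFiles is an item list. A sketch:

        <Target Name="AfterBuild">
          <!-- Wildcards are expanded here, after the build output exists -->
          <ItemGroup>
            <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />
            <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />
            <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />
          </ItemGroup>
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" ItemName="ActuallyDeleted" />
          </Delete>
          <Message Text="DELETED FILES: @(ActuallyDeleted)" Importance="high" />
        </Target>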

    Read the article

  • What files should be added to SVN in an eclipse Java project?

    - by Jake Petroules
    I have a Java project, created with Eclipse, that I'd like to commit to my SVN repository. Now, what files (aside from the source code, obviously) are necessary? In the workspace root there is a .settings folder with many files and subfolders, and inside the project folder there are two files (.classpath and .project) and another .settings folder with a single file, org.eclipse.jdt.core.prefs. Which of these files should be committed to SVN and which can be safely excluded?

    Read the article

  • Add new SVN "repo" in poorly constructed repo/project setup

    - by Dave Masselink
    Unfortunately, the answer to this question isn't quite as simple as it sounds... but I hope it can still be relatively simple. Please read all the way through before telling me that the answer is: "svnadmin create... duh". I'm working for a company that set up their SVN server in an odd way (at least in terms of what I'm used to). We've all been there, right? Rather than giving each project a separate repository, they have a folder on the server called "/var/www/svn/repos/" which is the actual SVN repo (it has conf/, db/, README.txt, etc. in it). They then distinguish their projects by adding top-level folders to the ONE repository (e.g. Project1, Project2, etc.). I don't like this setup and might one day get around to converting it to what I'm used to, where each project is its own repository (with separate logs, dbs, etc.). But my question is this: what is the best way to add a new, empty project to the current setup? Is there any way to add a new top-level folder/project to the repo using svnadmin? It can/should just be an empty folder that I'll start building a new project in. I know that I could do this by checking out the whole repository, adding a new top-level folder to my local checkout, and committing, but I'd really prefer not to, because someone has created folders/projects that are just GBs of log data and I don't want to wait through that download just to add a single empty folder. Let me know if there is any more info you need. I do have root/sudo access on the server in question. Thanks in advance for your help! Dave
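
    For what it's worth, the folder can be created repository-side without any working copy at all: svn mkdir accepts a URL and commits the new directory directly. A sketch using a file:// URL, since the question mentions root access on the server itself (the project name is an illustration):

        svn mkdir -m "Add empty top-level folder for the new project" \
            file:///var/www/svn/repos/NewProject

    It isn't svnadmin, but it avoids checking out the existing gigabytes entirely.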

    Read the article

  • Downloading jQuery UI: Ok, so what part of this mess do I copy to the server?

    - by Martha
    From the "should be simple, but..." files: Trying to get started with jQuery UI. Went to the site, used their custom builder thingy to assemble the parts I need, made myself a custom theme using the Theme Roller, downloaded the zip file thus produced, unzipped it on my local drive. Ok, so I have 37 folders, 311 files, and a total of 2.4 MB. Ain't no way in hell all this is going on the server. What parts do I need to put there? 'css' 'custom-theme': jquery-ui-1.8.custom.css, 'images' subfolder with 12 .png images 'development-bundle' 'demos': demos.css, index.html, plus 18 subfolders, but I'm guessing "not needed" 'docs': 17 .html files, but again, I'm guessing "not needed" 'external': 4 .js files, one .css 'themes': 'base' and 'custom-theme' subfolders, each with 8 or 9 .css files and an 'images' subfolder with about a dozen images 'ui': 25 .js files, an 'i18n' subfolder with 53 .js files, and a 'minified' subfolder with 24 .js files 'js': jquery-1.4.2.min.js and jquery-ui-1.8.custom.min.js Also, the file structure. Our server is set up something like this: root admin (administrative tools) css forms (the gist of the site lives here) images include (asp code snippets that are used by multiple pages) js (just a few things right now, like an ancient wheezing spelling checker) As far as I can tell, the jQuery css files assume that (1) each theme is in its own folder, and (2) each folder has its own images subfolder. How can I convince it otherwise? i.e. put the necessary .js files in the 'js' folder, the .css files in the 'css' folder, and the images in the 'images' folder?

    Read the article

  • File uploading in asp.net permission error (Access denied to path x)

    - by Arash
    I'm trying to upload some image files in my ASP.NET web app. Server OS: Windows Server 2003 with IIS 6. I granted write permission in IIS to the root and destination folders, and granted Full Control to the IUSR_MachineName account, the ASP.NET user, Network Service, Everyone, and all other users on the web app root folder and the upload destination folder, but I still get an "Access denied to path x" error.
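
    For comparison, a minimal upload sketch (the control IDs and destination folder are assumptions): on IIS 6, unless impersonation is enabled, the page code runs as the application pool identity (NETWORK SERVICE by default), so it is that account that needs Modify rights on the physical destination folder; the IIS "write" permission and the anonymous IUSR account are not what FileUpload.SaveAs runs as.

        protected void btnUpload_Click(object sender, EventArgs e)
        {
            if (fileUpload.HasFile)
            {
                // Resolve the destination inside the application and keep only the
                // file name from the client-supplied path.
                string folder = Server.MapPath("~/uploads/images");
                string path = System.IO.Path.Combine(folder,
                    System.IO.Path.GetFileName(fileUpload.FileName));
                fileUpload.SaveAs(path);
            }
        }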

    Read the article

  • How to make the yuicompressor jar file a singleton, or globally accessible?

    - by Erik Vold
    I'd like to put the yuicompressor jar file in a single folder so that I can call java -jar yuicompressor-2.4.2.jar ... from anywhere on my system, under cygwin. For the bash scripts I use, I simply put them in a common folder and added the folder's path to my Windows user's PATH environment variable, and the commands were found in cygwin. When I echo $PATH I see the folder I put the yuicompressor jar into listed there. But when I try java -jar yuicompressor-2.4.2.jar ... I get the following error message: "Unable to access jarfile yuicompressor-2.4.2.jar". Even when I try providing an absolute path to the jarfile I get the same error message. How can I do this?
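
    PATH is only searched for executables, not for their arguments: java treats yuicompressor-2.4.2.jar as a literal path relative to the current directory, so it must be given a real path to the jar. One lightweight workaround is a shell alias (the install location shown is hypothetical):

        # in ~/.bashrc under cygwin
        alias yuic='java -jar /cygdrive/c/tools/yuicompressor-2.4.2.jar'

        # usage:
        yuic -o myfile-min.js myfile.js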

    Read the article

  • Why does Git display certain new folders when checking out old revisions?

    - by ConnorG
    Hey all - I'm still learning the ropes of Git (love it!) but the other day I noticed some behavior I just do not understand. We have, in essence, three folders that got moved into the repository at different times (one immediately after we created the repo, one a little while later, and one just recently). Recently, I had to get some code out of an old revision. I used git checkout <old SHA1 hash> to pull up one of our first checkins, when I noticed Git showed the old folder (as it should), as well as the newest folder (which got added to the repo long after the checked out commit was made). But it did not show the second folder. What would cause Git to display the newest folder with the old revision?
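
    A guess at the mechanism, since it can't be verified from the description alone: git checkout only adds, removes, and updates tracked files; it never deletes untracked files or directories that contain them. So if the newest folder holds untracked content (build output, ignored files), it will still be sitting in the working tree after checking out the old commit, while the second folder, being fully tracked, was cleanly removed. A quick way to check:

        # list untracked files and directories that checkout left behind (dry run)
        git clean -nd

        # remove them if the list looks right
        git clean -fd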

    Read the article

  • Issue in creating Zip file using glob.glob

    - by infosyssec
    Hi, I am creating a zip file from a folder (and subfolders). It works fine and creates a new .zip file, but I am having an issue while using glob.glob: it reads all files from the desired (source) folder and writes them to the new zip file, but while it adds the subdirectories themselves, it does not add the files inside them. I give the user an option to select the filename and path as well as the filetype (zip or tar). I don't get any problem creating a .tar.gz file, but when the user creates a .zip file this problem comes up. Here is my code:

        for name in (Source_Dir):
            for name in glob.glob("/path/to/source/dir/*"):
                myZip.write(name, os.path.basename(name), zipfile.ZIP_DEFLATED)
        myZip.close()

    Also, if I use the code below:

        for dirpath, dirnames, filenames in os.walk(Source_Dir):
            myZip.write(os.path.join(dirpath, filename), os.path.basename(filename))
        myZip.close()

    then it takes all files, even those inside the folder/subfolders, creates a new .zip file and writes the files to it without any directory structure. It does not even keep the directory structure of the main folder; it simply writes all files from the main dir and subdirs flat into the .zip file. Can anyone please help me or make a suggestion? I would prefer glob.glob rather than the second option. Thanks in advance. Regards, Akash
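
    For reference, a sketch of the os.walk variant that keeps the folder structure: the trick is to store each file under its path relative to the source folder (the arcname) instead of just its base name. glob alone will not descend into subdirectories, so os.walk (or a recursive glob of your own) is needed either way. The paths and archive name here are placeholders.

        import os
        import zipfile

        source_dir = "/path/to/source/dir"
        my_zip = zipfile.ZipFile("backup.zip", "w", zipfile.ZIP_DEFLATED)
        for dirpath, dirnames, filenames in os.walk(source_dir):
            for filename in filenames:
                full_path = os.path.join(dirpath, filename)
                # the arcname preserves the directory layout inside the archive
                my_zip.write(full_path, os.path.relpath(full_path, source_dir))
        my_zip.close()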

    Read the article

  • Where are Kohana config files?

    - by elmonty
    I've just installed Kohana 3.0.4.2 and I have run the index.php file successfully. According to the documentation, the next step is to edit the config files in the application/config folder. I have that folder but there are no files in it! I downloaded the package again to make sure it wasn't corrupted, but the same problem exists. Why is the application/config folder empty?

    Read the article

  • Subversion: Adding files to the project

    - by Ran
    Hi, I am using library xyz, whose files live in folder xyz, and I want to update the files (e.g. an upgrade to a new version). Can I just copy the new xyz folder into my project using the file browser? The folder has both files and directories. /Subversion noob

    Read the article

  • Redirect Using htaccess

    - by manyxcxi
    I am trying to redirect /folder to / using .htaccess but all I am getting is the Apache HTTP Server Test Page. My root directory looks like this:

        /
          .htaccess
          /folder
          /folder2
          /folder3

    My .htaccess looks like this:

        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule ^(.*)$ folder/$1 [L]

    What am I doing wrong? I checked my httpd.conf (I'm running CentOS 5.3) and the mod_rewrite library is being loaded. As a side note, my server is not a www server; it's simply a virtual machine, so its hostname is centosvm. Addition: I have found that the mod_rewrite module is loaded, but none of my .htaccess redirects seem to be working.
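
    Seeing the default test page together with .htaccess rules having no effect at all usually means the .htaccess file is simply being ignored. The stock CentOS httpd.conf sets AllowOverride None for the document root, so per-directory rewrite rules never run; a sketch of the change (the path is the CentOS default document root and may differ on your box):

        # httpd.conf, then restart with: service httpd restart
        <Directory "/var/www/html">
            AllowOverride All
        </Directory>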

    Read the article

  • ASP.Net MVC ActionLink's and Shared Hosting Aliased Domains

    - by Peter Meyer
    So, I've read this and I've got a similar issue: I have a shared hosting account (with GoDaddy, though that's not exactly relevant, I believe) and an MVC (RC1) site deployed to a sub-folder to which another domain name is mapped (or aliased). The sub-folder is also set up as an application root. The site works without issue; the problem is that I don't like the links being generated by Html.ActionLink and Ajax.ActionLink, which insert the sub-folder name as part of the URL, as described in this other question. Thing is, the site works fine, but I'd like the generated links to be relative to the domain name. To use an example: http://my.abc.com is the "primary" domain and maps to \ on the file system; http://my.xyz.com is set up to map to the \_xyz.com folder on the file system. My generated links on xyz.com look like this: intended, http://my.xyz.com/Ctrller/Action/52; generated, http://my.xyz.com/_xyz.com/Ctrller/Action/52. And, FWIW, the site works. So, the question: is there something I can do to get rid of that folder name in the links being generated? I have a couple of brute-force ideas, but they aren't too elegant.

    Read the article

  • Delete specific files after installation using visual studio setup project

    - by Vadiklk
    I have this problem: I want to build an installer for my C# solution that will be placed in a folder alongside other installation folders and files that need to be copied to the installed folder. That part is easy; I just copy them to the folder I create, using the folder structure I want. Now, I also want to install another program and run a .exe file I've created to unzip some files for me. For that I need to copy 2 .exe files and 2 dlls (for the exes) to the folder I am installing to, and create 2 custom actions that use them. That I've managed to do. After that I want to delete those 4 extra files, as the user does not need them and shouldn't even be aware they are there. How do I do so? I couldn't find a way in the built-in setup project preferences, and I do not know how to make a custom installer class. A bonus question: how do I make the other installer (one of the .exe files is just a plain installer) install quietly to any path? I do not want the user to see an installer pop out of my program installer. Thanks!
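
    A rough sketch of the custom installer class route: the file names and the /targetdir parameter below are illustrative; the class goes into a project whose output is included in the setup, is registered as a custom action (typically on the Commit node), and is given CustomActionData such as /targetdir="[TARGETDIR]\".

        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;
        using System.IO;

        [RunInstaller(true)]
        public class CleanupInstaller : Installer
        {
            public override void Commit(IDictionary savedState)
            {
                base.Commit(savedState);
                string targetDir = Context.Parameters["targetdir"];
                // Remove the helper files once the custom actions that used them have run.
                foreach (string name in new string[] { "unzip.exe", "unzip.dll", "helper.exe", "helper.dll" })
                {
                    string path = Path.Combine(targetDir, name);
                    if (File.Exists(path))
                        File.Delete(path);
                }
            }
        }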

    Read the article

  • Storing uploaded content on a website

    - by Matt
    For the past 5 years, my typical solution for storing uploaded files (images, videos, documents, etc.) was to throw everything into an "upload" folder and give each file a unique name. I'm looking to refine my methods for storing uploaded content and I'm wondering what other methods are used or preferred. I've considered storing each item in its own folder (the folder name being the Id in the db) so I can preserve the uploaded file name. I've also considered uploading all media to a locked folder and then using a file handler, to which you pass the Id of the file you want to download in the querystring; it would then read the file and send the bytes to the user. This is handy for checking access and restricting bandwidth for users.
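
    A minimal sketch of the locked-folder-plus-handler idea, assuming files are stored under App_Data (which ASP.NET never serves directly) using their numeric Id as the file name; real code would look up the original name and content type in the database and check the caller's rights before streaming.

        using System.IO;
        using System.Web;

        public class UploadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                int id;
                if (!int.TryParse(context.Request.QueryString["id"], out id))
                {
                    context.Response.StatusCode = 400;
                    return;
                }

                // Access checks against the record for this id would go here.
                string path = context.Server.MapPath("~/App_Data/uploads/" + id);
                if (!File.Exists(path))
                {
                    context.Response.StatusCode = 404;
                    return;
                }

                context.Response.ContentType = "application/octet-stream";
                context.Response.TransmitFile(path);
            }
        }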

    Read the article

  • Using Office 2007 extension (i.e. docx) for skin based On-Screen keyboard.

    - by Peymankh
    Hi guys, I'm creating an on-screen keyboard for my application, and it supports skins as well. Here's what I'm doing with the skins: I have a folder which contains some images and an XML file that maps the images to the keyboard. I want to be able to ship the folder as a single zip file, the way Office 2007 does (.docx) and iPhone firmware does (.ipsw). I know I can simply zip the folder and change the extension; what I need to know is how to read the files in code. Thanks in advance.
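
    One way to read the renamed zip from code (offered as an example, not the only option) is a zip library such as SharpZipLib; .docx itself is read the same way, Office just layers a packaging convention on top of a plain zip. A sketch that loads every file entry into memory, keyed by its name inside the archive:

        using System.Collections.Generic;
        using System.IO;
        using ICSharpCode.SharpZipLib.Zip;

        public static class SkinReader
        {
            public static Dictionary<string, byte[]> ReadSkin(string skinPath)
            {
                var entries = new Dictionary<string, byte[]>();
                using (var zip = new ZipInputStream(File.OpenRead(skinPath)))
                {
                    ZipEntry entry;
                    byte[] buffer = new byte[4096];
                    while ((entry = zip.GetNextEntry()) != null)
                    {
                        if (!entry.IsFile)
                            continue;
                        var ms = new MemoryStream();
                        int read;
                        while ((read = zip.Read(buffer, 0, buffer.Length)) > 0)
                            ms.Write(buffer, 0, read);
                        // entry.Name distinguishes the xml map from the key images
                        entries[entry.Name] = ms.ToArray();
                    }
                }
                return entries;
            }
        }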

    Read the article

  • General Policies and Procedures for Maintaining the Value of Data Assets

    Here is a general list of policies and procedures for maintaining the value of data assets.

    Data Backup Policies and Procedures
    Backups are very important when dealing with data because there is always the chance of losing data due to faulty hardware or user activity, so a strategic backup system should be mandatory for all companies. That said, in the real world some companies that I have worked for do not really have a good data backup plan, and when companies take this kind of approach to backups the data is usually not really recoverable. Unfortunately, when companies do not regularly test their backup plans they get a false sense of security because they think that they are covered. However, I can tell you from personal and professional experience that a backup plan or system is never fully implemented until it is regularly tested, well before the time when it actually needs to be used.

    Disaster Recovery Plan
    Expanding on backup policies and procedures, a company also needs a disaster recovery plan in order to protect its data in case of a catastrophic disaster. Disaster recovery plans typically cover how to restore all of a company's data and infrastructure to an operational state, and most also include time estimates for how long each step of the plan should take to execute. It is important to note that disaster recovery plans, like backup plans, are never fully implemented until they have been tested. They should be tested regularly so that the business can be confident it will lose little or no data in a catastrophic disaster.

    Firewall Policies and Content Filters
    One way companies can protect their data is by using a firewall to separate their internal network from the outside. Firewalls allow or deny network access as data passes through them by applying defined restrictions, and they can likewise be used to restrict access from the internal network to the outside. Common firewall restrictions:
    - Destination/sender IP address
    - Destination/sender host names
    - Domain names
    - Network ports
    Companies may also want to restrict what their network users view on the internet through content filters. Content filters allow a company to track which web pages a person has accessed and can restrict access based on rules defined in the filter. These devices or software can block access to domains or specific URLs based on a few factors. Common content filter criteria:
    - Known malicious sites
    - Specific page content
    - Page content theme

    Anti-Virus/Malware Policies
    Fortunately, most companies run antivirus programs on all computers and servers, and for good reason: viruses have been known to corrupt, invalidate, destroy, and steal data. Anti-virus applications are a great way to prevent malicious applications from gaining access to a company's data. However, anti-virus programs must be constantly updated, because new viruses are always being created and the vendors need to distribute updates so that their applications can catch and remove them.

    Data Validation Policies and Procedures
    Data validation is very important to ensure that only accurate information is stored. The existence of invalid data can cause major problems when businesses attempt to use data for knowledge-based decisions and for performance reporting.

    Data Scrubbing Policies and Procedures
    Data scrubbing is valuable to companies in one of two ways. The first is cleaning data before it is analyzed for report generation. The second is that it allows companies to remove things like personally identifiable information from data before transmitting it between environments or sending it to an external location. An example of this can be seen with medical records, where HIPAA rules govern the storage of specific personal and medical information. Additionally, I have professionally run into a scenario where the Canadian government does not allow any Canadian's personal information to be stored on a server not located in Canada.

    Encryption Practices
    The use of encryption is very valuable when a company needs to protect personal information. It allows users with the appropriate access levels to view or confirm the existence or accuracy of data within a system, by either decrypting the information or encrypting a piece of data and comparing it to the stored version. Additionally, if for some unforeseen reason the data got into the wrong hands, it would first have to be decrypted before it could even be read. Encryption simply adds an additional layer of protection around the data itself.

    Standard Normalization Practices
    The use of standard data normalization practices is very important when dealing with data because it can prevent a lot of potential issues by eliminating unnecessary data duplication. Issues caused by data duplication include excess use of data storage, an increased chance of invalid data, and overuse of data processing.

    Network and Database Security/Access Policies
    Every company has some form of network/data access policy, even if that policy is to have none at all. These policies help keep data from being seen by inappropriate users and prevent data from being updated or deleted by users who should not be able to do so. In addition, without a good security policy there is a large potential for data to be corrupted by unassuming users, or even stolen.

    Data Storage Policies
    Data storage policies are very important, depending on how they are implemented, especially when a company is trying to use them in conjunction with other policies like data backups. I have worked at companies where all network user folders are backed up regularly, so if a user wanted to ensure that a file survived, they had to store it in their network folder. Conversely, I have also worked in places where a user's entire profile is backed up whenever they log on or off of the network.

    Training Policies
    One of the biggest ways to prevent data loss and ensure that data remains a company asset is training. Properly training employees on how to work within the systems that access data is crucial when trying to ensure a company's data will remain an asset. Users need to be trained on how to manipulate a company's data in order to perform their tasks, which reduces the chance of invalidating the data.

    Read the article

  • setting source classpath in eclipse

    - by lisak
    What do you do when you have a huge project, built with Ant for instance, where the source folders sit right below the root project folder, and you need to build the Eclipse classpath from those source files? Putting the entire project in as a source folder is nonsense, and separate folders can't be added as source folders if they are part of the package hierarchy. The only thing I could think of is to copy the source folders into a separate folder and add that as a source folder, which is weird, but I don't know how else to do it. Having to duplicate sources just because of the way Eclipse builds its classpath (and because of somebody's poor project structure) seems wrong.

    Read the article

  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that I continuously evaluate both my effectiveness and my efficiency.

    EFFECTIVENESS
    The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate or am I still operating at the previous level)? Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role, change of team/org/etc, change of direction of team/org/etc, re-org, new hires on the team that take on some of the work I did, personal promotion, change of manager... and if none of those events has occurred since the last annual review, I ask myself those at each annual review anyway.

    If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment; don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way!

    That's my view on effectiveness. You have to fix that before moving to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time. If your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it.

    EFFICIENCY
    The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change at two events: because you have new colleagues (who are potentially better than your older ones), and with a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you... All you really need to be doing here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at.

    If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about what steps they take, to see if you can optimize some steps away or trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take on so that you can have repeatable success.

    That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible, so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: you spend more time than necessary dealing with email, and you still end up with dropped email threads and slower response times than others. How can you improve? Talk to someone that you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you.

    Parting Thoughts
    Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess which things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of "Good Enough".

    My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work it is for two reasons:
    - They are investing energy in stuff that they enjoy doing which the business regards as having a lower priority than a lot of other things on their plate.
    - They are completing tasks to a higher level of quality than what is required (due to personal pride), missing the big picture, which almost always mandates completing three tasks at good-enough quality rather than knocking only one of them out of the park while the other two come in late or not at all.

    There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • How do I unpublish a .Net application

    - by Peggy A
    I published a C# .NET application to the wrong folder. I am using VS 2005. How do I unpublish the app so that I can republish it in the correct folder? I tried simply publishing to another folder, and now the app will not run from either location.

    Read the article

  • umbraco front end site stopped working suddenly

    - by Srilakshmi
    Hi All, I created a web application and placed the Default.aspx page in the root folder of the Umbraco site (i.e., the httpdocs folder) and the application dll in the bin folder. I used the name "Default.aspx" because other names were not working. Now the issue is that all pages redirect to the Default.aspx page (I haven't made any config changes anywhere in the Umbraco setup). Thinking this was the root cause, I removed the Default.aspx page and its dll from the bin folder, but now I get: "The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /default.aspx". I am stuck here and struggling to resolve it. Please help me out on this. Thanks, Srilakshmi

    Read the article
