Search Results

Search found 68536 results on 2742 pages for 'pst file'.

  • IIS: 404 error on every file in a virtual directory.

    - by Scott Chamberlain
    I am trying to write my first WCF service for IIS 6.0. I followed the instructions on MSDN. I created the virtual directory and I can browse the directory fine, but anything I click (even a sub-folder in that folder) gives me a 404 error. What am I missing that I cannot access any files or folders? Any logs or whatnot you need, just tell me where to find them in the comments and I will post them. UPDATE: Found the log; here is what it says when I connect and try to click on a sub-folder.
    #Software: Microsoft Internet Information Services 6.0
    #Version: 1.0
    #Date: 2010-03-07 19:08:07
    #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
    2010-03-07 19:08:07 W3SVC1 74.62.95.101 GET /prx2.php hash=AA70CBCE8DDD370B4A3E5F6500505C6FBA530220D856 80 - 221.192.199.35 Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.0) 404 0 2
    #Software: Microsoft Internet Information Services 6.0
    #Version: 1.0
    #Date: 2010-03-07 22:21:20
    #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
    2010-03-07 22:21:20 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 2 2148074254
    2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 1 0
    2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 webinfinity\srchamberlain 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 200 0 0
    2010-03-07 22:21:29 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/bin/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 404 0 2

    Read the article

  • Can the .htaccess file slow down a website to a crawl? If so, are there better ways to solve these problems with different rewrite rules and such?

    - by Parimal
    Here is my .htaccess file:
    RewriteCond %{REQUEST_URI} ^/patients/billing/FAQ_billing\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/billing/getintouch\.html$
    RewriteRule ^patients/billing/(.*)\.html$ $1.php [L,NC]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/a\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/b\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/c\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/d\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/e\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/f\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/g\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/h\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/i\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/j\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/k\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/l\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/m\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/n\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/o\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/p\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/q\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/r\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/s\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/t\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/u\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/v\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/w\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/x\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/y\.html$ [OR]
    RewriteCond %{REQUEST_URI} ^/patients/findadoctor/z\.html$
    RewriteRule ^patients/findadoctor/(.*)\.html$ findadoctor.php?id=$1 [L,NC]
    There are lots of rules like that, around 250 lines in total. Please help me.

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
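
    A hedged diagnostic sketch, using only the paths already shown above: run the exact command that NRPE runs, as the nagios user, with sudo in non-interactive mode. If the sudoers entry matches, the script output appears; if sudo still wants a password, it fails immediately instead of prompting, which is one common way to end up with "NRPE: Unable to read output".
        # Run this as root: become the nagios user, then invoke the plugin the way NRPE does
        sudo -u nagios /usr/bin/sudo -n /usr/lib/nagios/plugins/http-restart
        echo "exit code: $?"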

    Read the article

  • Slow local file transfer (copy) on ESX vmware server?

    - by Sorin Sbarnea
    I have an 8-CPU VMware ESX server (3.5) with 4 HDD drives in RAID that is not loaded at all. I enabled SSH and installed mc (Midnight Commander) in order to be able to copy (clone) virtual machines, but I observed that it copies the files very slowly - around 3.5 MB/s on the local drive. Why is this happening and how should I solve the issue?
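
    A hedged aside, not from the question itself: on the ESX 3.5 service console, virtual disks are usually cloned with vmkfstools rather than a plain file copy, which lets VMFS handle the data movement. A minimal sketch with hypothetical datastore paths:
        # Hypothetical paths - clone a virtual disk with vmkfstools instead of copying the flat file
        vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
                   /vmfs/volumes/datastore1/vm1-clone/vm1-clone.vmdk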

    Read the article

  • Use an Ubuntu Live CD to Securely Wipe Your PC’s Hard Drive

    - by Trevor Bekolay
    Deleting files or quickly formatting a drive isn't enough for sensitive personal information. We'll show you how to get rid of it for good using an Ubuntu Live CD. When you delete a file in Windows, Ubuntu, or any other operating system, it doesn't actually destroy the data stored on your hard drive; it just marks that data as "deleted." If you overwrite it later, then that data is generally unrecoverable, but if the operating system doesn't happen to overwrite it, then your data is still stored on your hard drive, recoverable by anyone who has the right software. By securely deleting files or entire hard drives, your data will be gone for good. Note: Modern hard drives are extremely sophisticated, as are the experts who recover data for a living. There is no guarantee that the methods covered in this article will make your data completely unrecoverable; however, they will make your data unrecoverable to the majority of recovery methods, and all methods that are readily available to the general public. Shred individual files Most of the data stored on your hard drive is harmless, and doesn't reveal anything about you. If there are just a few files that you know you don't want someone else to see, then the easiest way to get rid of them is a built-in Linux utility called shred. Open a terminal window by clicking on Applications at the top-left of the screen, then expanding the Accessories menu and clicking on Terminal. Navigate to the file that you want to delete using cd to change directories and ls to list the files and folders in the current directory. As an example, we've got a file called BankInfo.txt on a Windows NTFS-formatted hard drive. We want to delete it securely, so we'll call shred by entering the following in the terminal window: shred <file> which is, in our example: shred BankInfo.txt Notice that our BankInfo.txt file still exists, even though we've shredded it. A quick look at the contents of BankInfo.txt makes it obvious that the file has indeed been securely overwritten. We can use some command-line arguments to make shred delete the file from the hard drive as well. We can also be extra-careful about the shredding process by upping the number of times shred overwrites the original file. To do this, in the terminal, type in: shred --remove --iterations=<num> <file> By default, shred overwrites the file 25 times. We'll double this, giving us the following command: shred --remove --iterations=50 BankInfo.txt BankInfo.txt has now been securely wiped on the physical disk, and also no longer shows up in the directory listing. Repeat this process for any sensitive files on your hard drive! Wipe entire hard drives If you're disposing of an old hard drive, or giving it to someone else, then you might instead want to wipe your entire hard drive. shred can be invoked on hard drives, but on modern file systems, the shred process may be reversible. We'll use the program wipe to securely delete all of the data on a hard drive. Unlike shred, wipe is not included in Ubuntu by default, so we have to install it. Open up the Synaptic Package Manager by clicking on System in the top-left corner of the screen, then expanding the Administration folder and clicking on Synaptic Package Manager. wipe is part of the Universe repository, which is not enabled by default. We'll enable it by clicking on Settings > Repositories in the Synaptic Package Manager window. Check the checkbox next to "Community-maintained Open Source software (universe)". Click Close.
    You'll need to reload Synaptic's package list. Click on the Reload button in the main Synaptic Package Manager window. Once the package list has been reloaded, the text over the search field will change to "Rebuilding search index". Wait until it reads "Quick search," and then type "wipe" into the search field. The wipe package should come up, along with some other packages that perform similar functions. Click on the checkbox to the left of the label "wipe" and select "Mark for Installation". Click on the Apply button to start the installation process. Click the Apply button on the Summary window that pops up. Once the installation is done, click the Close button and close the Synaptic Package Manager window. Open a terminal window by clicking on Applications in the top-left of the screen, then Accessories > Terminal. You need to figure out the correct hard drive to wipe. If you wipe the wrong hard drive, that data will not be recoverable, so exercise caution! In the terminal window, type in: sudo fdisk -l A list of your hard drives will show up. A few factors will help you identify the right hard drive. One is the file system, found in the System column of the list - Windows hard drives are usually formatted as NTFS (which shows up as HPFS/NTFS). Another good identifier is the size of the hard drive, which appears after its identifier (highlighted in the following screenshot). In our case, the hard drive we want to wipe is only around 1 GB in size, and is formatted as NTFS. We make a note of the label found under the Device column heading. If you have multiple partitions on this hard drive, then there will be more than one device in this list. The wipe developers recommend wiping each partition separately. To start the wiping process, type the following into the terminal: sudo wipe <device label> In our case, this is: sudo wipe /dev/sda1 Again, exercise caution - this is the point of no return! Your hard drive will be completely wiped. It may take some time to complete, depending on the size of the drive you're wiping. Conclusion If you have sensitive information on your hard drive - and chances are you probably do - then it's a good idea to securely delete sensitive files before you give away or dispose of your hard drive. The most secure way to delete your data is with a few swings of a hammer, but shred and wipe from an Ubuntu Live CD are a good alternative!
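
    For quick reference, a minimal recap of the commands the article walks through (the file name and device are the article's own examples; always confirm the device with fdisk before wiping anything):
        # Shred a single file: overwrite it repeatedly, then unlink it
        shred --remove --iterations=50 BankInfo.txt
        # List disks and partitions to identify the right device - wiping the wrong one is unrecoverable
        sudo fdisk -l
        # Wipe an entire partition (example device from the article)
        sudo wipe /dev/sda1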

    Read the article

  • Dynamic Unpivot: SSIS Nugget

    - by jamiet
    A question on the SSIS forum earlier today asked: I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of values. I want to unpivot it without editing my SSIS packages that is deployed. Let's be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example it converts something like this:
    AccountCode  Jan     Feb     Mar
    AC1          100.00  150.00  125.00
    AC2          45.00   75.50   90.00
    into something like this:
    AccountCode  Month  Amount
    AC1          Jan    100.00
    AC1          Feb    150.00
    AC1          Mar    125.00
    AC2          Jan    45.00
    AC2          Feb    75.50
    AC2          Mar    90.00
    The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example; however, in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change, and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design-time. There is a workaround, however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that "column" and (3) output each one as a column all of its own. Let's go over that in a bit more detail. I've prepared a data file with some data that we want to unpivot - some customers and their mythical shopping lists (it has column names in the first row). We use a Flat File Connection Manager to specify the format of our data file to SSIS, and a Flat File Source Adapter to put it into the dataflow (no need for a screenshot of that one - it's very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component, where the real work goes on, although the code is pretty simple: here I show a screenshot of this executing along with some data viewers. As you can see we have successfully pulled out all of the values into rows of their own, thus accomplishing the dynamic unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet
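
    As an aside, the same dynamic-unpivot idea is easy to sketch outside SSIS. The snippet below is a hedged illustration (it is not the script component from the post, which isn't reproduced in this excerpt); it assumes a comma-separated file named accounts.csv whose first row holds the column names:
        # Read the header row to learn however many value columns exist this month,
        # then emit one AccountCode,Month,Amount row per value.
        awk -F',' '
            NR == 1 { for (i = 2; i <= NF; i++) month[i] = $i; print "AccountCode,Month,Amount"; next }
                    { for (i = 2; i <= NF; i++) print $1 "," month[i] "," $i }
        ' accounts.csv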

    Read the article

  • SQL SERVER – Fix: Error: File cannot be loaded because the execution of scripts is disabled on this system. Please see “get-help about_signing” for more details

    - by pinaldave
    Yesterday I formatted my computer and did a fresh install, as it was long overdue. After the fresh install, when I tried to install the Semantic Search application using PowerShell, I was stopped by the following error: File cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details. Fix/Solution/Workaround: The solution is very simple. Open the PowerShell window and type the following two lines, and everything will be fine right after that:
    Set-ExecutionPolicy Unrestricted
    Set-ExecutionPolicy RemoteSigned
    Again, this is what I have done for my environment, where I am very careful about what I run. You can change the policy back to the original restricted policy if you want to restrict future execution of PowerShell scripts. Simple - isn't it? Well, all complex-looking problems are very simple to solve. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Powershell

    Read the article

  • Unable to ssh out anywhere - ssh_exchange_identification

    - by Chowlett
    I have a setup where I'm running Ubuntu 11.10 as a VirtualBox guest under a Windows 7 host, behind a restrictive corporate firewall. I have set up NAT from the host port 22 to Ubuntu's port 22; IT inform me that they have opened port 22 outbound for the host machine's IP address. I have run ssh-keygen -t rsa, and am trying to test the setup by connecting to github and another known ssh server. In both cases the connect is refused with ssh_exchange_identification: Connection closed by remote host. Full -vvv log is below. Is this possibly still due to the corporate firewall? If so, what else might I need to request from them? Any other ideas what might be wrong and how to fix it? ~$ ssh -Tvvv [email protected] OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to github.com [207.97.227.239] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/home/chris/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/chris/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/chris/.ssh/id_rsa-cert type -1 debug1: identity file /home/chris/.ssh/id_dsa type -1 debug1: identity file /home/chris/.ssh/id_dsa-cert type -1 debug1: identity file /home/chris/.ssh/id_ecdsa type -1 debug1: identity file /home/chris/.ssh/id_ecdsa-cert type -1 ssh_exchange_identification: Connection closed by remote host Edit: Requested diagnostics: ~$ ls -la ~/.ssh total 16 drwx------ 2 chris chris 4096 2012-03-30 13:12 . drwxr-xr-x 29 chris chris 4096 2012-03-30 13:25 .. -rw------- 1 chris chris 1766 2012-03-30 13:12 id_rsa -rw-r--r-- 1 chris chris 409 2012-03-30 13:12 id_rsa.pub

    Read the article

  • Cannot Import VPN connection

    - by ECII
    Since 12.04 I cannot connect to my VPN. My .ovpn file is the following: http://email.uoa.gr/help/download/vpn/edunet.ovpn When I try to import the VPN file I get the following error: The file 'edunet.ovpn' could not be read or does not contain recognized VPN connection information. Error: unknown PPTP file extension. Is there any way around this error? I have already installed network-manager-openvpn.

    Read the article

  • How can I clean up my bashrc/zshrc file?

    - by LuxuryMode
    Over time, I've added bunches of stuff to my PATH and it's lookin' pretty awful. How can I clean this up, or what's the proper way to "reformat" all of this?
    export PATH="$PATH:~/scripts"
    export PATH="$PATH:~/Downloads/android-sdk-mac_x86/platform-tools/adb"
    export PATH=/opt/local/bin:/opt/local/sbin:$PATH
    export PATH="$PATH:~/Downloads/android-sdk-mac_x86/platform-tools:~/Downloads/android-sdk-mac_x86/tools:~/Downloads/android-sdk-mac_x86/platform-tools/adb"
    export PATH="$PATH:~/bin"
    export PATH="$PATH:~/bin/subl"
    export PATH="$PATH:~/.rvm/gems/ruby-1.9.3-head/gems/git-media-0.1.1/bin"
    export PATH=$PATH:$HOME/bin:/Users/me/Downloads/android-sdk-mac_86/tools
    export PATH=$PATH:$HOME/bin:/Users/me/Downloads/android-sdk-mac_86/platform-tools
    export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin:/.rvm/scripts/rvm:/.rvm/scripts/rvm:/~/Downloads/android-sdk-mac_x86/tools/android:/~/Downloads/android-ndk-r7/:/~/Downloads/android-sdk-mac_x86/platform-tools
    export CC=gcc-4.2
    export PATH=~/Downloads/android-ndk-r7:$PATH
    ANDROID_HOME=~/Downloads/android-sdk-mac_x86
    export PATH=${PATH}:$ANDROIDHOME/platform-tools
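
    One hedged way to tidy this up - a sketch rather than a drop-in replacement; the directories are taken from the exports above, duplicates are dropped, and $HOME is used because ~ is not expanded inside double quotes (nor when PATH entries are searched):
        # Android SDK/NDK locations collected in one place
        ANDROID_HOME="$HOME/Downloads/android-sdk-mac_x86"
        ANDROID_NDK="$HOME/Downloads/android-ndk-r7"
        export ANDROID_HOME

        # Build PATH once, in priority order, instead of appending in a dozen places
        PATH="/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin"
        PATH="$PATH:$HOME/bin:$HOME/bin/subl:$HOME/scripts"
        PATH="$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$ANDROID_NDK"
        PATH="$PATH:$HOME/.rvm/gems/ruby-1.9.3-head/gems/git-media-0.1.1/bin"
        export PATH

        export CC=gcc-4.2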

    Read the article

  • SSIS Design Pattern: Loading Variable-Length Rows

    - by andyleonard
    Introduction I encounter flat file sources with variable-length rows on occasion. Here, I supply one SSIS Design Pattern for loading them. What's a Variable-Length Row Flat File? Great question - let's start with a definition. A variable-length row flat file is a text source of some flavor - comma-separated values (CSV), tab-delimited file (TDF), or even fixed-length, positional-, or ordinal-based (where the location of the data on the row defines its field). The major difference between a "normal"...(read more)

    Read the article

  • Web.Config is Cached

    - by SGWellens
    There was a question from a student over on the Asp.Net forums about improving site performance. The concern was that every time an app setting was read from the Web.Config file, the disk would be accessed. With many app settings and many users, it was believed performance would suffer. Their intent was to create a class to hold all the settings, instantiate it and fill it from the Web.Config file on startup. Then, all the settings would be in RAM. I knew this was not correct and didn't want to just say so without any corroboration, so I did some searching. Surprisingly, this is a common misconception. I found other code postings that cached the app settings from Web.Config. Many people even thanked the posters for the code. In a later post, the student said their text book recommended caching the Web.Config file. OK, here's the deal. The Web.Config file is already cached. You do not need to re-cache it. From this article http://msdn.microsoft.com/en-us/library/aa478432.aspx It is important to realize that the entire <appSettings> section is read, parsed, and cached the first time we retrieve a setting value. From that point forward, all requests for setting values come from an in-memory cache, so access is quite fast and doesn't incur any subsequent overhead for accessing the file or parsing the XML. The reason the misconception is prevalent may be because it's hard to search for Web.Config and cache without getting a lot of hits on how to set up caching in the Web.Config file. So here's a string for search engines to index on: "Is the Web.Config file Cached?" A follow-up question was, are the connection strings cached? Yes. http://msdn.microsoft.com/en-us/library/ms178683.aspx At run time, ASP.NET uses the Web.Config files to hierarchically compute a unique collection of configuration settings for each incoming URL request. These settings are calculated only once and then cached on the server. And, as everyone should know, if you modify the Web.Config file, the web application will restart. I hope this helps people to NOT write code! Steve Wellens | CodeProject

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - Allows my users to share large files with:
      - the general public
      - securely to 1 or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users. Perhaps by invitation.
    - Has to be user-friendly enough to allow my users to use this without having to bug me as the admin.
    - It needs to be a system that we can install on our own server (we don't want shared data sitting on anyone else's server).
    - A web-based solution.
    - Using some kind of secure comms channel would be good too, e.g. SSH.
    - Files to share could be over 1 GB.
    I found the question below. WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen
    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • Ubuntu 13.10 unity won't load from this morning

    - by user287957
    I turned on my pc this morning and unity will not load at all. I have tried loading it manually using ctrl+alt+f1 and all i got from it was the following:- compiz (core) - Info: Loading plugin: core compiz (core) - Info: Starting plugin: core compiz (core) - Info: Loading plugin: ccp compiz (core) - Info: Starting plugin: ccp compizconfig - Info: Backend : gsettings compizconfig - Info: Integration : true compizconfig - Info: Profile : unity compiz (core) - Info: Loading plugin: composite compiz (core) - Info: Starting plugin: composite compiz (core) - Info: Loading plugin: opengl compiz (core) - Info: Starting plugin: opengl libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/r600_dri.so failed (/usr/lib/x86_64- linux-gnu/dri/r600_dri.so: undefined symbol: _glapi_tls_Dispatch) libGL error: dlopen ${ORIGIN}/dri/r600_dri.so failed (${ORIGIN}/dri/r600_dri.so: cannot open shared object file: No such file or directory) libGL error: dlopen /usr/lib/dri/r600_dri.so failed (/usr/lib/dri/r600_dri.so: cannot open shared object file: No such file or directory) libGL error: unable to load driver: r600_dri.so libGL error: driver pointer missing libGL error: failed to load driver: r600 libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so: undefined symbol: _glapi_tls_Dispatch) libGL error: dlopen ${ORIGIN}/dri/swrast_dri.so failed (${ORIGIN}/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL error: unable to load driver: swrast_dri.so libGL error: failed to load driver: swrast compiz (core) - Info: Loading plugin: compiztoolbox compiz (core) - Info: Starting plugin: compiztoolbox compiz (core) - Info: Loading plugin: decor compiz (core) - Info: Starting plugin: decor compiz (core) - Info: Loading plugin: copytex compiz (core) - Info: Starting plugin: copytex compiz (core) - Info: Loading plugin: snap compiz (core) - Info: Starting plugin: snap compiz (core) - Info: Loading plugin: resize compiz (core) - Info: Starting plugin: resize compiz (core) - Info: Loading plugin: gnomecompat compiz (core) - Info: Starting plugin: gnomecompat compiz (core) - Info: Loading plugin: move compiz (core) - Info: Starting plugin: move compiz (core) - Info: Loading plugin: place compiz (core) - Info: Starting plugin: place compiz (core) - Info: Loading plugin: mousepoll compiz (core) - Info: Starting plugin: mousepoll compiz (core) - Info: Loading plugin: regex compiz (core) - Info: Starting plugin: regex compiz (core) - Info: Loading plugin: imgpng compiz (core) - Info: Starting plugin: imgpng compiz (core) - Info: Loading plugin: vpswitch compiz (core) - Info: Starting plugin: vpswitch compiz (core) - Info: Loading plugin: grid compiz (core) - Info: Starting plugin: grid compiz (core) - Info: Loading plugin: animation compiz (core) - Info: Starting plugin: animation compiz (core) - Info: Loading plugin: expo compiz (core) - Info: Starting plugin: expo compiz (core) - Info: Loading plugin: session compiz (core) - Info: Starting plugin: session compiz (core) - Info: Loading plugin: wall compiz (core) - Info: Starting plugin: wall compiz (core) - Info: Loading plugin: fade compiz (core) - Info: Starting plugin: fade compiz (core) - Info: Loading plugin: unitymtgrabhandles compiz (core) - Info: Starting plugin: unitymtgrabhandles compiz (core) - Info: Loading plugin: ezoom compiz (core) 
- Info: Starting plugin: ezoom compiz (core) - Info: Loading plugin: workarounds compiz (core) - Info: Starting plugin: workarounds compiz (core) - Info: Loading plugin: scale compiz (core) - Info: Starting plugin: scale compiz (core) - Info: Loading plugin: unityshell compiz (core) - Info: Starting plugin: unityshell WARN 2014-06-03 10:55:31 unity.glib.dbus.server GLibDBusServer.cpp:586 Can't register object 'com.canonical.Autopilot.Introspection' yet as we don't have a connection, waiting for it... WARN 2014-06-03 10:55:31 unity.glib.dbus.server GLibDBusServer.cpp:586 Can't register object 'com.canonical.Unity.Debug.Logging' yet as we don't have a connection, waiting for it... compiz (unityshell) - Error: GL_ARB_vertex_buffer_object not supported compiz (core) - Error: Plugin initScreen failed: unityshell compiz (core) - Error: Failed to start plugin: unityshell compiz (core) - Info: Unloading plugin: unityshell X Error of failed request: BadWindow (invalid Window parameter) Major opcode of failed request: 18 (X_ChangeProperty) Resource id in failed request: 0x4000006 Serial number of failed request: 9909 Current serial number in output stream: 9913 It was all working fine yesterday but this morning there was nothing. Please help Many Thanks

    Read the article

  • How can I make changes to this file's encoding?

    - by SuperUserMan
    I have these 3 files:
    21/08/2014  07:15 PM    122 Tw2AWK.csv
    21/08/2014  07:15 PM    125 Tw2Notepad.csv
    21/08/2014  07:15 PM    119 Tw2REPL.csv
    C:\myfiles> file Tw2AWK.csv TwREPL.csv Tw2Notepad.csv
    Tw2AWK.csv;     UTF-8 Unicode text, with CRLF line terminators
    Tw2REPL.csv;    UTF-8 Unicode text
    Tw2Notepad.csv; UTF-8 Unicode (with BOM) text, with CRLF line terminators
    The HEX of these files is as follows:
    C:\myfiles> xxd -p Tw2REPL.csv
    0a222344656c686947616e675261706520776173206120736d616c6c2069
    6e636964656e7420746f2023536d616c6c5261706973744a6169746c6579
    20646e61696e6469612e636f6d2f696e6469612f7265706f72742d69e280
    a6207069632e747769747465722e636f6d2f6762565070776637744f22
    C:\myfiles> xxd -p Tw2AWK.csv
    0d0a222344656c686947616e675261706520776173206120736d616c6c20
    696e636964656e7420746f2023536d616c6c5261706973744a6169746c65
    7920646e61696e6469612e636f6d2f696e6469612f7265706f72742d69e2
    80a6207069632e747769747465722e636f6d2f6762565070776637744f22
    0d0a
    C:\myfiles> xxd -p Tw2Notepad.csv
    efbbbf0d0a222344656c686947616e675261706520776173206120736d61
    6c6c20696e636964656e7420746f2023536d616c6c5261706973744a6169
    746c657920646e61696e6469612e636f6d2f696e6469612f7265706f7274
    2d69e280a6207069632e747769747465722e636f6d2f6762565070776637
    744f220d0a
    I want Tw2REPL.csv to look like Tw2Notepad.csv. How can I do it? NOTE: I have to do this all via the command line (batch). I can use any 3rd-party standalone exes though. I am on Windows XP. Please help, it's very important for me.
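
    A hedged sketch of one way to do the conversion, assuming the Unix-style toolkit already in use above (file, xxd) also provides awk - for example a GnuWin32 or UnxUtils port; it is written here with Unix-shell quoting, so the quotes need adjusting if run from cmd.exe:
        # Prepend the UTF-8 BOM (octal \357\273\277 = EF BB BF), then re-emit every
        # line with a CRLF ending, including the last line even if it lacked one.
        awk 'BEGIN { printf "\357\273\277" } { printf "%s\r\n", $0 }' Tw2REPL.csv > Tw2REPL-fixed.csv
        # The result should now report the same as Tw2Notepad.csv:
        file Tw2REPL-fixed.csv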

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, those of which make fairly simple (non costly) queries to a PostgreSQL database, and finally return data in JSON format, these services are being monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrong doing from Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and Memory Usage are never out of bounds. I also have something to kill high memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're far and few between, when I was at around 60-100 a second before this trouble seemed to have arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at others time it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.

    Read the article

  • Trying to configure domain-based access via htaccess file.

    - by kenja
    I've created an account with no-ip.com that registers my IP with a subdomain of their service. When I do an nslookup, I see that the service is working and that my domain is being shown. Now I want to provide access to that subdomain on the admin site of our server, which is protected by .htaccess IP restrictions. When I try to add the new domain to my script it does not work. Am I doing something wrong? I'm basically trying to make it so my laptop can log in from no matter where I am, while still preventing all other IPs from accessing the site.
    ## password begin ##
    AuthName "Restricted Access"
    AuthUserFile /usr/www/users/site/.passwd
    AuthType Basic
    Require valid-user
    Order deny,allow
    Deny from all
    Allow from 69.1.122.161 mysubdomain.no-ip.org
    Satisfy All

    Read the article

  • Fixing a SkyDrive Sync Disaster

    - by Rick Strahl
    For a few months I've been using SkyDrive to handle some basic synching tasks for a number of folders of mine. Specifically I've been dumping a few of my development folders into sky drive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly! The idea is that the SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync back older files to my local machine from the SkyDrive server. So rather than syncing my newer files to the server SkyDrive was pushing older files back to me. Because SkyDrive is so slow actually updating data it's not unusual for SkyDrive to be far behind in syncing and apparently some files were out of date by several months. Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noted a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders. To be fair SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now updated with old files from online. Here's what some of this looks like: If you look at the directory list you see a bunch of files with a -RasXps postfix appended to them. Those are the files that SkyDrive replaced and backed up on my machine. As you can see the backed up files are actually newer than the ones it pulled from the online SkyDrive. Unless I modified the files after they were updated they all were older than the existing local files. Not exactly how I imagined my synching would work. At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace with the -RasXps file, but not in all files. Some scrutiny was required and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to compare a few files where it wasn't quite clear what's wrong. I quickly realized that doing this by hand would be too hard for the large number of files that got hosed. Hacking together a small .NET Utility So, I figured the easiest way to tackle this is to write a small utility app that shows me all the mangled files that have backups, allows me to compare them and then quickly select and update them, removing the -RasXps file after choosing one of the two files. What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder, and then shows all the -MachineName files: I start by picking a base folder and a template to search for - typically the -MachineName. Clicking Go brings up a list of all files in that folder and its subdirectories.  The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two. I can right click on any file and get a context menu pop up to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few). 
To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the Saved files - that is, the backup file that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or on some occasions I can just opt to delete both of them. For some files like binaries it's often easier to just delete them and rebuild than to choose. For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made since the file was updated incorrectly by SkyDrive. For me luckily those are few in number. Anyways, I thought I'd share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works and you can tweak it as needed. The .NET 4.5 binaries are also included if you can't compile. Be warned though! This rough code is provided as is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it. This tool is very narrow in focus, but it might also work with other sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point and it too created the -MachineName style files on sync conflicts. Hope this helps somebody out so you can avoid wasting the better part of a full work day on this… Resources Download the Source Code and Binaries for SkyDrive Rescue © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, .NET

    Read the article

  • Do I have to do anything for file ACLs to work both before and after I reformat a computer?

    - by Zian Choy
    Let's say that I have a computer running Windows Vista with 2 users: Alice and Bob. Alice is the admin and Bob is a normal user. They each have files in their respective My Documents folders and Bob is not allowed to view Alice's files and Alice has to jump through a UAC elevation to view Bob's. If Alice copies all the files on the computer to an external NTFS-formatted hard drive with the following 2 commands: robocopy "E:\Bob's Files" "C:\Users\Bob\My Documents" /MIR robocopy "E:\Alice's Files" "C:\Users\Alice\My Documents" /MIR And then reformats the hard drive, installs a fresh copy of Windows, and creates 2 users named Alice and Bob on the computer, then will everything in the first paragraph be true after Alice copies the files back onto the internal hard drive? Assume that when the files are copied back over, she logs in as Bob and then copies Bob's files and likewise with her own files. Possibly relevant: Alice and Bob also have passwords on their user accounts and they create new passwords after the computer is reformatted. The main post has been tweaked slightly to make the question clearer. Answers that predate April 2011 are referring to an earlier version of this post.

    Read the article

  • How can I automatically restart Apache and Varnish if can't fetch a file?

    - by Tyler
    I need to restart Apache and Varnish and email some logs when the script can't fetch robots.txt but I am getting an error
    ./healthcheck: 43 [[: not found
    My server is Ubuntu 12.04 64-bit
    #!/bin/sh
    # Check if can fetch robots.txt if not then restart Apache and Varnish
    # Send last few lines of logs with date via email
    PATH=/bin:/usr/bin
    THEDIR=/tmp/web-server-health
    [email protected]
    mkdir -p $THEDIR
    if ( wget --timeout=30 -q -P $THEDIR http://website.com/robots.txt )
    then
        # we are up
        touch ~/.apache-was-up
    else
        # down! but if it was down already, don't keep spamming
        if [[ -f ~/.apache-was-up ]]
        then
            # write a nice e-mail
            echo -n "Web server down at " > $THEDIR/mail
            date >> $THEDIR/mail
            echo >> $THEDIR/mail
            echo "Apache Log:" >> $THEDIR/mail
            tail -n 30 /var/log/apache2/error.log >> $THEDIR/mail
            echo >> $THEDIR/mail
            echo "AUTH Log:" >> $THEDIR/mail
            tail -n 30 /var/log/auth.log >> $THEDIR/mail
            echo >> $THEDIR/mail
            # kick apache
            echo "Now kicking apache..." >> $THEDIR/mail
            /etc/init.d/varnish stop >> $THEDIR/mail 2>&1
            killall -9 varnishd >> $THEDIR/mail 2>&1
            /etc/init.d/varnish start >> $THEDIR/mail 2>&1
            /etc/init.d/apache2 stop >> $THEDIR/mail 2>&1
            killall -9 apache2 >> $THEDIR/mail 2>&1
            /etc/init.d/apache2 start >> $THEDIR/mail 2>&1
            # prepare the mail
            echo >> $THEDIR/mail
            echo "Good luck troubleshooting!" >> $THEDIR/mail
            # send the mail
            sendemail -o message-content-type=html -f [email protected] -t $EMAIL -u ALARM -m < $THEDIR/mail
            rm ~/.apache-was-up
        fi
    fi
    rm -rf $THEDIR
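
    A hedged note, based only on the error text above: line 43 is the [[ -f ... ]] test, and [[ is a bash/ksh construct that /bin/sh (dash on Ubuntu 12.04) does not implement, which is what produces "[[: not found". A minimal sketch of the two usual fixes, assuming the script is saved as healthcheck:
        # Option 1: run the script under bash, which implements [[ ... ]]
        #   i.e. change the first line of healthcheck from "#!/bin/sh" to "#!/bin/bash"
        # Option 2: keep #!/bin/sh and use the POSIX single-bracket test instead
        if [ -f ~/.apache-was-up ]
        then
            echo "web server was up on the previous check"
        fi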

    Read the article

  • Music before bells and whistles

    - by Tony Davis
    Why is it that Windows has so much difficulty in finding content on its file system? This is not an insurmountable technical problem; on my laptop, I have a database within which I can instantly find text or names within millions of records, within 300 milliseconds. I have a copy of Google Desktop that can find phrases within emails or documents, almost as quickly. It is an important, though mundane, part of an operating system to be able to find files. The first thing I notice within Windows is that the facility to find files or text within files is called 'search' rather than 'find'. Hmm. This doesn't bode well. What's this? It does a brute-force search for file names? Here we are in an age when we can breed mice that glow in the dark, and manufacture computers that fit in our shirt pockets, and we find an operating system that is still entirely innocent of managing and indexing content in hierarchical data. I can actually read the files of my PC into a database, mimic the directory/folder hierarchies and then find files in a flash; but when I do the same with Windows Vista, we are suddenly back in a 1960s time warp. Finding files based on their name is bad enough, but finding files based on the content that they contain is more or less asking for an opportunity to wait 20 minutes in order to see a "file not found" message. Sadly, with Windows 7, Microsoft seems to have fallen into the familiar trap of adding bells and whistles before finishing the song. It's certainly true that Microsoft has added new features and a certain polish to Windows Search 4.0, the latest incarnation. It works more like a web search and offers a new search syntax, called Advanced Query Syntax, which allows you to search on file author, file size, date ranges (e.g. date:=7/4/09), and so on. Even so, search still does not work reliably. I've experienced first-hand its stubborn refusal, despite a full index, to acknowledge the existence of a file I know exists, based on a search for a specific term within that file that I know is in there somewhere; a file that Google Desktop search, or old wingrep, finds in seconds. When users hark back to the halcyon days of Windows XP search, you know something is seriously amiss. Shouldn't applications get the functionality right before applying animated menus and Teletubby graphics, or is advancing age making me grumpy? I'd be pleased to hear your views, as always. Cheers, Tony.

    Read the article

  • How do you install Firefox plugins from a file?

    - by Ethan
    I would like to install this SQLite Manager plug-in for Firefox on Mac OS X. However, the page advertising the plug-in does not offer the customary "get addon" link (or whatever it's called). There is only the possibility of downloading files, such as SQLiteManager_0.5.0b5.xpi. How do you use that to install the plug-in?

    Read the article
