Search Results

Search found 15558 results on 623 pages for 'basic authentication'.

  • Apache directory access with virtual host [SOLVED]

    - by alexeygaidamaka
    I have a virtual host with the configuration below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of the directory contents. www.foobar.com/dir has 777 permissions and .htpasswd is chmoded 644, but I can't figure out why I still can't see the contents. Please give me a hint.

        ServerAdmin webmaster@localhost
        ServerName www.foobar.com
        ServerAlias www.foobar.com
        DocumentRoot /var/www/foobar
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/foobar>
            Options -Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        <Directory /var/www/foobar/dir>
            AllowOverride AuthConfig
            AuthName "Authorize yourself, please!"
            AuthType Basic
            AuthUserFile /etc/apache2/.htpasswd
            AuthGroupFile /dev/null
            Allow from All
            Order Allow,Deny
            Options +Indexes    <- this is the line that should be added
            Require valid-user
        </Directory>

    [SOLVED] You have to add the line Options +Indexes to see the directory contents.
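
    For reference, a minimal version of the protected directory block with that fix applied might look like the sketch below (Apache 2.2-style syntax, paths as given in the question). Without mod_autoindex's Options +Indexes, a request for a directory that has no index file is answered with 403 even after successful authentication.

        <Directory /var/www/foobar/dir>
            Options +Indexes
            AuthType Basic
            AuthName "Authorize yourself, please!"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>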

    Read the article

  • How can I tell System Restore in Windows 7 recovery console to use my recovered backup drive's restore point data?

    - by Rich Shealer
    My Windows 7 desktop PC failed to boot. It would get to a grayish screen with a mouse cursor and would only respond to the power button. After much examination I found that the problem was not a failed drive, as running CHKDSK from the Recovery Console on my main drives passed without any errors. I had been installing various Java versions in the days before the failure, so I decided to use a restore point to roll backwards. I have an external SATA drive controller with two 2 TB drives mirrored using the Windows mirroring function, and my system has been backing up to this drive regularly. The problem is I accidentally broke the mirror when testing to see if this drive system might have been causing my boot issue. Connecting it to another machine showed two dynamic drives that were invalid. In the end I reformatted one as an NTFS basic disk and used recovery software on the other to copy all of the files to the reformatted drive. I had to copy the restore points into the new drive's System Volume Information folder by granting rights to that user. I moved the drive back to the original machine and rebooted. I can see my new drive, and it even uses the same drive letter as it did in normal mode. Running System Restore, it lists a new Automatic Restore point created while sitting at the Recovery Console, along with all of my backups. Selecting the backup I want (or any other), I get a dialog: "The backup drive could not be found. System Restore is looking for restore points on your backup. Make sure the backup drive is on and connected to this computer and then click OK." What do I need to do to allow System Restore to see the restore points?

    Read the article

  • CNAME vs A records

    - by deb
    I built a small Rails app that allows users to make a simple site. It uses subdomain accounts, e.g. deb.myapp.com. Whenever a user wanted a domain name associated with their site, they would change their NS records to point to Slicehost, where the application is hosted, and I would manage the DNS records myself. However, as more people are using the application, this is not an option for me anymore. I prefer users to keep their nameservers at GoDaddy, register.com, etc., so they can log in and manage their own MX records or whatever else they need to change. My question is: should I have them change the A records to point to my server's IP, or should I have them create a CNAME record? Do they need to delete the default A records to allow the CNAME record to work? Will the A record take precedence and override the CNAME record? Thanks in advance. Sorry if this is a very basic question; I've read other posts and I can't find a definite answer.
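
    For illustration, a hedged sketch of the two options in zone-file notation (placeholder names and a documentation IP; in practice the customer would add the equivalent records in their registrar's DNS panel rather than edit a zone file). One caveat: a CNAME cannot coexist with other records on the same name, so it normally cannot be used at the bare domain (the zone apex), and an existing A record on a name does have to be removed before a CNAME on that name will work.

        ; option 1: point the name directly at the server's IP (must be updated if the server's IP changes)
        www.customersite.com.   IN  A      203.0.113.10

        ; option 2: alias the name to the app's hostname (follows the app automatically if its IP changes)
        www.customersite.com.   IN  CNAME  deb.myapp.com.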

    Read the article

  • allow public access to subfolder of protected folder on apache

    - by UnnamedMook
    I have password-protected the root folder of my website while I do maintenance, but I want to display a custom 401 error page to let people know the site is under construction. Unfortunately, my web host doesn't allow me write access to anything outside the root folder of my website, so this custom error page must be stored in the root folder or one of its subfolders. Instead of my custom error page I get the Apache default error page, and it also says "Additionally, a 401 Authorization Required error was encountered while trying to use an ErrorDocument to handle the request." I searched for ways to make a subfolder of a protected directory public, and all I could find was to use the "Satisfy any" directive, but this doesn't work for me. It doesn't work on a file-only basis either, as with the .htaccess file below.

        #Authorization Restriction
        AuthType Basic
        AuthName "Access to root"
        AuthUserFile *********************************
        Require user ***********
        Order Allow,Deny
        Satisfy any

        #Error Documents
        ErrorDocument 401 Error-401.html

        #Allow access to error documents
        <Files Error-*,html>
            Order Deny,Allow
            Allow from all
            Satisfy any
        </Files>

    I can only use .htaccess files; I don't have access to httpd.conf.
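
    One detail that stands out in the posted file: the <Files> pattern uses a comma (Error-*,html) where a dot was probably intended, so it never matches Error-401.html, and ErrorDocument is usually given a root-relative path starting with a slash so Apache treats it as a local document rather than a literal message. A hedged sketch of how that part might look instead (still .htaccess-only, Apache 2.2 syntax; whether it fully resolves the 401-on-401 loop depends on the host's configuration):

        ErrorDocument 401 /Error-401.html

        <Files "Error-401.html">
            Order Deny,Allow
            Allow from all
            Satisfy Any
        </Files>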

    Read the article

  • How can I change the file name of the Program Files (x86) folder?

    - by Madrauk
    Let me stop you before the "you shouldn't do that" and "you will corrupt your file paths" replies: I know what I'm doing, but I'll give you the story to convince you. Basically, my hard drive is failing to the point where programs installed on it are not responding consistently. So, in preparation for my replacement, I'm moving as many files as possible to my secondary drive, because the new drive will be smaller (it's an emergency; I can't buy a fancy expensive new drive) and so that I can actually use my computer until the new drive comes in. The basic idea is this: I've copied the contents of the Program Files (x86) folder to another spot on my other drive, and I want to replace the original folder on the C: drive with a symbolic link that points to the other drive, so the programs can run from that drive and be fine, but it will save space on my C: drive and simplify the moving process when the new drive comes in. To do that, I need to rename the Program Files (x86) folder, make the symbolic link, hopefully delete the original folder, then restart my computer so all the programs are running properly and I can confirm it works. Now that I've told you why, can anyone help me out?
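
    A hedged sketch of the junction approach from an elevated command prompt. It assumes the copy to D: exists (the robocopy line repeats it for completeness) and that the original folder can actually be renamed, which usually requires that nothing inside it is running; in practice that often means doing it offline, from a recovery environment, or from another Windows install, and the drive letters here are illustrative.

        rem copy the folder contents to the other drive, preserving attributes and ACLs
        robocopy "C:\Program Files (x86)" "D:\Program Files (x86)" /E /COPYALL

        rem move the original out of the way, then create a junction in its place
        ren "C:\Program Files (x86)" "Program Files (x86).old"
        mklink /J "C:\Program Files (x86)" "D:\Program Files (x86)"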

    Read the article

  • Slow Routing Over LAN (Wired)

    - by reverendj1
    I'm having issues with my router acting very slow (an Adtran NetVanta 3458). We have two networks; let's call them A and B. When I run netperf between two servers on network A (no routing) I get speeds along the lines of 900 Mbps, which makes sense since we have all 1 Gbps switches. When testing A to B (or vice versa) I get speeds along the lines of 22 Mbps. I have also tested by connecting my laptop to the switchports on the router and testing two servers on network A (no routing), and got speeds around 90 Mbps, which makes sense since the switchports on the router are 100 Mbps. Does anyone have any idea why routing would be so slow? We bought the router over a year ago, and we think it has been doing this since then, but we never actually tested it before (network B isn't really used much, so we didn't notice). We were implementing a site-to-site VPN and noticed it was ridiculously slow, so we started testing basic routing performance. I have ruled out cabling and router CPU/memory utilization. Adtran looked at my config but didn't see anything wrong with it.

    Read the article

  • Display on secondary video card (Nvidia 8400 GS): horrible refresh, bogs system

    - by minameismud
    This is my work computer, but it's a small shop. We do business software development; the most hardcore thing we create is some web animations with HTML5 and fancy JavaScript/CSS. The base machine is a Dell Precision T3500: Xeon W3550 (3.07 GHz quad), 6 GB RAM, a pair of 500 GB hard drives, and Windows 7 x64 Enterprise SP1. My primary video card is an ATI FirePro V4800 1GB in a PCIe slot of some speed, driving a pair of 23" monitors at 1920x1080 through DisplayPort-to-HDMI adapters. The secondary card is an NVIDIA GeForce 8400 GS in a PCI slot, driving a single 17" monitor at 1280x1024 through DVI. On either of the 23" monitors, windows move smoothly, scroll quickly, and are generally very responsive. On the 17", it's slow and chunky, and when I'm trying to scroll a ton of content, Windows will occasionally suggest I drop to the Windows Basic theme. I've updated drivers for both cards, and I've gotten every Windows update relating to video. Specifically:

        ATI FirePro      Provider: Advanced Micro Devices, Inc.   Date: 6/22/2014   Version: 13.352.1014.0
        NVidia 8400 GS   Provider: NVIDIA                         Date: 7/2/2014    Version: 9.18.13.4052

    Unfortunately, new hardware isn't really an option. Is there anything I can do software-wise to speed up the NVIDIA-driven monitor?

    Read the article

  • Raspberry Pi how to format HDD

    - by Speed
    Hi, I am very new to the Raspberry Pi environment, so I'm looking for a bit of help formatting a USB hard disk drive. I ran lsblk and got:

        sda    8:0  0  37.3G  0  disk
        sda1   8:1  0  37.3G  0  part

    (There is nothing written after the word "part" above; there used to be /media/USB40gb, so I have done something, because that has disappeared.) Looking on the web, I tried "sudo mkfs.ext4 /dev/sda1 -L USB40gb". It did something, but when I tried to mount the drive again it still showed the files that were there before, and I cannot create a new file or folder ("Error creating directory: Permission denied"). I am writing this from my Windows 8.1 PC, so I cannot cut and paste from the Pi, and transcribing its output is a bit hard. I am using PCManFM 0.9.10; it does not have a format option, which would make life a lot easier, but then it's not Windows. I think I am running the basic Linux OS for the Pi; it boots to a graphical environment, but I don't know how to tell exactly what it is. I think it's Openbox 2.0.4. Thanks in advance, Speed. PS: I reran the format command above, but this time I changed the label to read USB37gb, to confirm that I was in fact formatting the right drive. Lo and behold, it actually formatted the drive, wiping everything from it. Great... but testing it by creating a new folder on the drive, I get "Permission denied" again. So I have fixed the formatting issue by trial and error, but I still can't use the drive. Suggestions, anyone?
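
    A common cause of exactly this symptom: a freshly created ext4 filesystem is owned by root, so the default pi user cannot write to it until ownership (or permissions) of the mounted filesystem is changed. A hedged sketch, assuming the mount point /media/USB40gb and the default pi user (both are assumptions from the question, not confirmed):

        sudo mkdir -p /media/USB40gb
        sudo mount /dev/sda1 /media/USB40gb
        # hand the top level of the new filesystem to the pi user so it can create files
        sudo chown pi:pi /media/USB40gb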

    Read the article

  • Reasons for Ajax navigation breaking on Coldfusion/Apache when running an app in iOS fullscreen mode? [closed]

    - by frequent
    Not sure if this belongs on SO or here. I'm running a web app using jQuery, jQuery Mobile and RequireJS, with Apache, ColdFusion 8 and MySQL 5.0.88 on the server side. The app works fine until I try to run it in fullscreen mode on iOS (add an icon to the home screen and launch the app from there, with <meta name="apple-mobile-web-app-capable" content="yes" /> specified). This meta tag will break the jQuery Mobile AJAX navigation: the AJAX request fails and the requested page is loaded as a new page, thereby restarting the app on every page change. I have chased this through the whole front end, starting from RequireJS, through jQuery Mobile's AJAX navigation, down to the AJAX request being made in jQuery: xhr.send( ( s.hasContent && s.data ) || null ); In a regular browser this works, no problem. In fullscreen mode it fails (readyState=0, empty response). I have found an article which argues that fullscreen mode is like a browser instance with different HTTP strings; on ASP.NET this results in the browser not being identified by the server and only basic browser settings being assumed (e.g. no JavaScript). I'm a little lost as to where to start looking for possible causes server-side. I have not written any server code for handling AJAX page navigation, so this must be something that is handled out of the box by ColdFusion or Apache? Question: where could I start looking for probable causes of fullscreen mode breaking AJAX navigation, if I assume ColdFusion or Apache are the culprits? Is there a setting I'm missing in httpd.conf? What else could be the problem? Thanks for any input!

    Read the article

  • Allow more websocket connections

    - by Switz
    I want to load-balance my Node.js (DerbyJS, to be specific) application on a basic Linode (512 MB RAM). It can probably take more than one process running at once. The queries/database do not concern me, as I'm not doing anything intensive. The problem at the moment is that it can only handle up to ~40 websocket connections at once. I would love to get that number into the few-hundred-plus range. I anticipate a lot of traffic at launch, due to the fact that it's a highly niche community with an engaged audience, but afterwards it should be fine with just ~20-40 connections at once, which it handles perfectly as of now. I don't mind spending a bit of money for a week or two worth of running, but I also don't want to switch production environments. How can I test the process to see how many instances I am able to run on the box? Will increasing the number of processes increase the number of websockets I can handle, or is that a limitation of the server's network? I have an old MacBook Pro running Linux sitting next to me that has 2 GB of RAM and a 2.8 GHz dual-core processor. Could I use this to handle some of the extra load? I could probably load-balance with nginx to its IP; I'm on a FiOS home network. If you have any suggestions, I'd really appreciate it. Thanks.
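
    For experimenting with multiple processes on one box, a hedged sketch using Node's built-in cluster module is below. It is generic HTTP, not DerbyJS-specific, and whether it raises the websocket ceiling depends on where the current limit actually is (per-process CPU, the per-process file-descriptor limit visible with ulimit -n, or the network); long-polling fallbacks also generally need sticky sessions when more than one worker is involved, which this sketch does not address.

        // cluster-sketch.js (hedged example; port and handler are placeholders)
        var cluster = require('cluster');
        var http = require('http');
        var numWorkers = require('os').cpus().length;

        if (cluster.isMaster) {
          for (var i = 0; i < numWorkers; i++) {
            cluster.fork();
          }
          cluster.on('exit', function (worker) {
            console.log('worker ' + worker.process.pid + ' died, restarting');
            cluster.fork();
          });
        } else {
          // each worker listens on the same port; the master distributes incoming connections
          http.createServer(function (req, res) {
            res.end('handled by ' + process.pid + '\n');
          }).listen(3000);
        }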

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that gives users in the Backup Operators group start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that it is very easy: create a GPO, give the user start/stop permissions for the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila, done. Not so much, but I have to be doing something wrong. My install is pretty much the default: the domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop two specific services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO giving the user the permission to start/stop the two services. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here.

    Read the article

  • Project Euler #15

    - by Aistina
    Hey everyone,
    Last night I was trying to solve challenge #15 from Project Euler: Starting in the top left corner of a 2×2 grid, there are 6 routes (without backtracking) to the bottom right corner. How many routes are there through a 20×20 grid?
    I figured this shouldn't be so hard, so I wrote a basic recursive function:

        const int gridSize = 20;

        // call with progress(0, 0)
        static int progress(int x, int y)
        {
            int i = 0;
            if (x < gridSize)
                i += progress(x + 1, y);
            if (y < gridSize)
                i += progress(x, y + 1);
            if (x == gridSize && y == gridSize)
                return 1;
            return i;
        }

    I verified that it worked for smaller grids such as 2×2 or 3×3, and then set it to run for a 20×20 grid. Imagine my surprise when, 5 hours later, the program was still happily crunching the numbers, and only about 80% done (based on examining its current position/route in the grid). Clearly I'm going about this the wrong way. How would you solve this problem? I'm thinking it should be solved using an equation rather than a method like mine, but that's unfortunately not a strong side of mine.
    Update: I now have a working version. Basically it caches results obtained before, when an n×m block still remains to be traversed. Here is the code along with some comments:

        // the size of our grid
        static int gridSize = 20;

        // the amount of paths available for a "NxM" block, e.g. "2x2" => 4
        static Dictionary<string, long> pathsByBlock = new Dictionary<string, long>();

        // calculate the surface of the block to the finish line
        static long calcsurface(long x, long y)
        {
            return (gridSize - x) * (gridSize - y);
        }

        // call using progress(0, 0)
        static long progress(long x, long y)
        {
            // first calculate the surface of the block remaining
            long surface = calcsurface(x, y);
            long i = 0;

            // zero surface means only 1 path remains
            // (we either go only right, or only down)
            if (surface == 0)
                return 1;

            // create a textual representation of the remaining
            // block, for use in the dictionary
            string block = (gridSize - x) + "x" + (gridSize - y);

            // if a same block has not been processed before
            if (!pathsByBlock.ContainsKey(block))
            {
                // calculate it in the right direction
                if (x < gridSize)
                    i += progress(x + 1, y);

                // and in the down direction
                if (y < gridSize)
                    i += progress(x, y + 1);

                // and cache the result!
                pathsByBlock[block] = i;
            }

            // self-explanatory :)
            return pathsByBlock[block];
        }

    Calling it 20 times, for grids with size 1×1 through 20×20, produces the following output:

        There are 2 paths in a 1 sized grid    0,0110006 seconds
        There are 6 paths in a 2 sized grid    0,0030002 seconds
        There are 20 paths in a 3 sized grid    0 seconds
        There are 70 paths in a 4 sized grid    0 seconds
        There are 252 paths in a 5 sized grid    0 seconds
        There are 924 paths in a 6 sized grid    0 seconds
        There are 3432 paths in a 7 sized grid    0 seconds
        There are 12870 paths in a 8 sized grid    0,001 seconds
        There are 48620 paths in a 9 sized grid    0,0010001 seconds
        There are 184756 paths in a 10 sized grid    0,001 seconds
        There are 705432 paths in a 11 sized grid    0 seconds
        There are 2704156 paths in a 12 sized grid    0 seconds
        There are 10400600 paths in a 13 sized grid    0,001 seconds
        There are 40116600 paths in a 14 sized grid    0 seconds
        There are 155117520 paths in a 15 sized grid    0 seconds
        There are 601080390 paths in a 16 sized grid    0,0010001 seconds
        There are 2333606220 paths in a 17 sized grid    0,001 seconds
        There are 9075135300 paths in a 18 sized grid    0,001 seconds
        There are 35345263800 paths in a 19 sized grid    0,001 seconds
        There are 137846528820 paths in a 20 sized grid    0,0010001 seconds
        0,0390022 seconds in total

    I'm accepting danben's answer, because his helped me find this solution the most. But upvotes also to Tim Goodman and Agos :)
    Bonus update: After reading Eric Lippert's answer, I took another look and rewrote it somewhat. The basic idea is still the same, but the caching part has been taken out and put in a separate function, like in Eric's example. The result is some much more elegant-looking code.

        // the size of our grid
        const int gridSize = 20;

        // magic.
        static Func<A1, A2, R> Memoize<A1, A2, R>(this Func<A1, A2, R> f)
        {
            // Return a function which is f with caching.
            var dictionary = new Dictionary<string, R>();
            return (A1 a1, A2 a2) =>
            {
                R r;
                string key = a1 + "x" + a2;
                if (!dictionary.TryGetValue(key, out r))
                {
                    // not in cache yet
                    r = f(a1, a2);
                    dictionary.Add(key, r);
                }
                return r;
            };
        }

        // calculate the surface of the block to the finish line
        static long calcsurface(long x, long y)
        {
            return (gridSize - x) * (gridSize - y);
        }

        // call using progress(0, 0)
        static Func<long, long, long> progress = ((Func<long, long, long>)((long x, long y) =>
        {
            // first calculate the surface of the block remaining
            long surface = calcsurface(x, y);
            long i = 0;

            // zero surface means only 1 path remains
            // (we either go only right, or only down)
            if (surface == 0)
                return 1;

            // calculate it in the right direction
            if (x < gridSize)
                i += progress(x + 1, y);

            // and in the down direction
            if (y < gridSize)
                i += progress(x, y + 1);

            // self-explanatory :)
            return i;
        })).Memoize();

    By the way, I couldn't think of a better way to use the two arguments as a key for the dictionary. I googled around a bit, and it seems this is a common solution. Oh well.
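
    For reference, the count the question asks for also has a closed form. Every monotone path across an n×n grid is a sequence of 2n moves, of which exactly n are "right" (the rest are "down"), so the number of paths is the central binomial coefficient:

        paths(n) = C(2n, n) = (2n)! / (n! * n!)
        paths(20) = C(40, 20) = 137,846,528,820

    which matches the memoized program's output above.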

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts: (1) read input data (from a file, database, TCP connection, etc.); (2) run calculations on the input data, where each calculation is independent of any other calculation; (3) write the results of the calculations (to a file, database, TCP connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can also run independently: part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: (1) process the input file into raw data (lists/iterables of integers); (2) calculate the sums of the data, in parallel; (3) output the sums. Below is a traditional, single-process-bound Python program which solves these three tasks:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # basicsums.py
        """A program that reads integer values from a CSV file and writes out their sums to another CSV file."""

        import csv
        import optparse
        import sys

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__,
                    """ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums"""])
            cli_parser = optparse.OptionParser(usage)
            return cli_parser

        def parse_input_csv(csvfile):
            """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based.
            :Parameters: - `csvfile`: a `csv.reader` instance
            """
            for i, row in enumerate(csvfile):
                row = [int(entry) for entry in row]
                yield i, row

        def sum_rows(rows):
            """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based.
            :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element
            """
            for i, row in rows:
                yield i, sum(row)

        def write_results(csvfile, results):
            """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based.
            :Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element
            """
            for result_row in results:
                csvfile.writerow(result_row)

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)
            # gets an iterable of rows that's not yet evaluated
            input_rows = parse_input_csv(in_csvfile)
            # sends the rows iterable to sum_rows() for results iterable, but
            # still not evaluated
            result_rows = sum_rows(input_rows)
            # finally evaluation takes place as a chain in write_results()
            write_results(out_csvfile, result_rows)
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, which needs to be fleshed out to address the parts in the comments:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # multiproc_sums.py
        """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired."""

        import csv
        import multiprocessing
        import optparse
        import sys

        NUM_PROCS = multiprocessing.cpu_count()

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__,
                    """ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums"""])
            cli_parser = optparse.OptionParser(usage)
            cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS,
                    help="Number of processes to launch [DEFAULT: %default]")
            return cli_parser

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)

            # Parse the input file and add the parsed data to a queue for
            # processing, possibly chunking to decrease communication between
            # processes.

            # Process the parsed data as soon as any (chunks) appear on the
            # queue, using as many processes as allotted by the user
            # (opts.numprocs); place results on a queue for output.
            #
            # Terminate processes when the parser stops putting data in the
            # input queue.

            # Write the results to disk as soon as they appear on the output
            # queue.

            # Ensure all child processes have terminated.

            # Clean up files.
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem; bonus points for addressing any/all of them. Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
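
    For illustration only (this is not the author's code), a hedged minimal sketch of one of the options being asked about: letting the main process do stages 1 and 3 while a Pool handles stage 2, using imap() so that results stream back in input order and can be written out as they finish rather than being collected in memory first. File names and the chunksize are placeholders.

        import csv
        import multiprocessing

        def sum_row(indexed_row):
            # stage 2: runs in a worker process; each row is independent
            i, row = indexed_row
            return i, sum(int(entry) for entry in row)

        def run(in_path, out_path, numprocs=multiprocessing.cpu_count()):
            with open(in_path) as infile, open(out_path, 'w') as outfile:
                reader = csv.reader(infile)
                writer = csv.writer(outfile)
                pool = multiprocessing.Pool(processes=numprocs)
                # stage 1: the main process feeds rows to the pool lazily;
                # stage 3: the main process writes each result as it arrives, in order
                for result in pool.imap(sum_row, enumerate(reader), chunksize=64):
                    writer.writerow(result)
                pool.close()
                pool.join()

        if __name__ == '__main__':
            run('input.csv', 'sums.csv')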

    Read the article

  • Ideas for multiplatform encrypted java mobile storage system

    - by Fernando Miguélez
    Objective: I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms: J2ME (minimum configuration/profile CLDC 1.1/MIDP 2.0, with support for some necessary JSRs, such as JSR-75 for file storage); Android (no minimum platform version decided yet, but rather likely API level 7); and BlackBerry (it would use the same base source as J2ME but take advantage of some advanced capabilities of the platform; no minimum configuration decided yet, maybe 4.6 because of the 64 KB limitation for RMS on 4.5). Basically the API would sport three kinds of stores: Files, which would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.); Preferences, a special store that handles properties accessed through keys (similar to a plain old Java properties file, but supporting some improvements such as different value data types, like SharedPreferences on the Android platform); and Local Message Queues, a store offering basic message queue functionality.

    Considerations: Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom-defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but one that would not change regardless of the underlying platform). The other types would only support flat storage. The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type.

    Implementation issues: For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms: Files are available on all platforms (on J2ME only with JSR-75, but it is mandatory for our needs); the abstract-file-to-actual-file mapping is straightforward except for addressing issues. RMS, available on the J2ME (and BlackBerry) platforms, is convenient for Preferences and maybe Message Queues (though depending on performance or size requirements these could be implemented by means of normal files). SharedPreferences, only available on Android, would match the needs of Preferences. SQLite databases could be used for message queues on Android (and maybe BlackBerry). When it comes to encryption, some requirements should be met: to ease the implementation it will be carried out on a read/write-operation basis on streams (for files), RMS records, SharedPreferences key-value pairs and SQLite database columns. Every underlying storage object should use the same encryption key. Handling of encrypted stores should be the same as for the unencrypted counterpart; the only difference (from the user's point of view) when accessing an encrypted store would be the addressing. The user PIN provides access to any secure storage object, but changing it would not require decrypting/re-encrypting all the encrypted data.

    Cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: on J2ME, SATSA-CRYPTO if it is available (not mandatory) or the lightweight BouncyCastle cryptographic framework for J2ME; on BlackBerry, the RIM Cryptographic API or BouncyCastle; on Android, JCE with the integrated cryptographic provider (BouncyCastle?).

    Doubts: Having reached this point, I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of them. Encryption algorithm for data: would AES-128 be strong and fast enough? What alternatives for such a scenario would you suggest? Encryption mode: I have read about the weakness of ECB encryption versus CBC, but in this case the former would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case? Key generation: there could be one key generated for each storage object (file, RMS RecordStore, etc.) or just one for all the objects of the same type. The first seems "safer", though it would require some extra space on the device; in your opinion, what would be the trade-offs of each? Key storage: for this case, a standard JKS (or PKCS#12) KeyStore file could be suited to storing encryption keys, but I could also define a smaller structure (encryption-transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedded inside other types of objects such as RMS record stores). Which approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given that this is your preference), would it be better to use a record store per storage object or just a global KeyStore keeping all keys (i.e. using the URL identifier of the abstract storage object as the alias)? Master key: the use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key and not all the encrypted data. Where would you keep it, taking into account that if it got lost all data would no longer be accessible? What further considerations should I take into account? Platform cryptography support: do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or another advantage I have not foreseen), and would this approach be preferred (whenever possible) over just the BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the licence cost over BouncyCastle? Any comments, critiques, further considerations or different approaches are welcome.
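
    To illustrate the "encryption on a read/write stream basis" idea in the question, a hedged sketch using the desktop JCE API is below (the J2ME/BouncyCastle lightweight API differs in syntax but follows the same shape: a cipher wrapping a stream). File name and plaintext are placeholders, and the key here is generated on the fly rather than unwrapped from a PIN-protected master key as the design describes.

        import javax.crypto.Cipher;
        import javax.crypto.CipherOutputStream;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.IvParameterSpec;
        import java.io.FileOutputStream;
        import java.io.OutputStream;
        import java.security.SecureRandom;

        public class EncryptedStreamSketch {
            public static void main(String[] args) throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(128);                         // AES-128, as discussed in the question
                SecretKey key = kg.generateKey();

                byte[] iv = new byte[16];             // CBC needs a fresh IV per stored object
                new SecureRandom().nextBytes(iv);

                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

                // store the IV in the clear next to the ciphertext, then write through the cipher
                OutputStream raw = new FileOutputStream("store.bin");
                raw.write(iv);
                OutputStream enc = new CipherOutputStream(raw, cipher);
                enc.write("secret record".getBytes("UTF-8"));
                enc.close();                          // flushes the final padded block
            }
        }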

    Read the article

  • Custom language - FOR loop in a Clojure interpreter?

    - by Mark
    I have a basic interpreter in Clojure. Now I need to implement

        for (initialisation; finish-test; loop-update) { statements }

    That is, implement a similar for-loop for the interpreted language. The pattern will be:

        (for variable-declarations end-test loop-update do statement)

    The variable-declarations will set up initial values for variables. The end-test returns a boolean, and the loop will end if end-test returns false. The statement is interpreted, followed by the loop-update, for each pass of the loop. Examples of use are:

        (run '(for ((i 0)) (< i 10) (set i (+ 1 i)) do (println i)))
        (run '(for ((i 0) (j 0)) (< i 10) (seq (set i (+ 1 i)) (set j (+ j (* 2 i)))) do (println j)))

    inside my interpreter. I will attach the interpreter code I have so far. Any help is appreciated.

    Interpreter:

        (declare interpret make-env) ;; needed as language terms call out to 'interpret'
        (def do-trace false) ;; change to 'true' to show calls to 'interpret'

        ;; simple utilities
        (def third ; return third item in a list
          (fn [a-list] (second (rest a-list))))
        (def fourth ; return fourth item in a list
          (fn [a-list] (third (rest a-list))))
        (def run ; make it easy to test the interpreter
          (fn [e] (println "Processing: " e) (println "=> " (interpret e (make-env)))))

        ;; for the environment
        (def make-env (fn [] '()))
        (def add-var (fn [env var val] (cons (list var val) env)))
        (def lookup-var (fn [env var] (cond (empty? env) 'error (= (first (first env)) var) (second (first env)) :else (lookup-var (rest env) var))))

        ;; for terms in language
        ;; -- define numbers
        (def is-number? (fn [expn] (number? expn)))
        (def interpret-number (fn [expn env] expn))
        ;; -- define symbols
        (def is-symbol? (fn [expn] (symbol? expn)))
        (def interpret-symbol (fn [expn env] (lookup-var env expn)))
        ;; -- define boolean
        (def is-boolean? (fn [expn] (or (= expn 'true) (= expn 'false))))
        (def interpret-boolean (fn [expn env] expn))
        ;; -- define functions
        (def is-function? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'lambda (first expn)))))
        (def interpret-function ; keep function definitions as they are written
          (fn [expn env] expn))
        ;; -- define addition
        (def is-plus? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '+ (first expn)))))
        (def interpret-plus (fn [expn env] (+ (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define subtraction
        (def is-minus? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '- (first expn)))))
        (def interpret-minus (fn [expn env] (- (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define multiplication
        (def is-times? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '* (first expn)))))
        (def interpret-times (fn [expn env] (* (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define division
        (def is-divides? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '/ (first expn)))))
        (def interpret-divides (fn [expn env] (/ (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define equals test
        (def is-equals? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '= (first expn)))))
        (def interpret-equals (fn [expn env] (= (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define greater-than test
        (def is-greater-than? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '> (first expn)))))
        (def interpret-greater-than (fn [expn env] (> (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define not
        (def is-not? (fn [expn] (and (list? expn) (= 2 (count expn)) (= 'not (first expn)))))
        (def interpret-not (fn [expn env] (not (interpret (second expn) env))))
        ;; -- define or
        (def is-or? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'or (first expn)))))
        (def interpret-or (fn [expn env] (or (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define and
        (def is-and? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'and (first expn)))))
        (def interpret-and (fn [expn env] (and (interpret (second expn) env) (interpret (third expn) env))))
        ;; -- define print
        (def is-print? (fn [expn] (and (list? expn) (= 2 (count expn)) (= 'println (first expn)))))
        (def interpret-print (fn [expn env] (println (interpret (second expn) env))))
        ;; -- define with
        (def is-with? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'with (first expn)))))
        (def interpret-with (fn [expn env] (interpret (third expn) (add-var env (first (second expn)) (interpret (second (second expn)) env)))))
        ;; -- define if
        (def is-if? (fn [expn] (and (list? expn) (= 4 (count expn)) (= 'if (first expn)))))
        (def interpret-if (fn [expn env] (cond (interpret (second expn) env) (interpret (third expn) env) :else (interpret (fourth expn) env))))
        ;; -- define function-application
        (def is-function-application? (fn [expn env] (and (list? expn) (= 2 (count expn)) (is-function? (interpret (first expn) env)))))
        (def interpret-function-application (fn [expn env] (let [function (interpret (first expn) env)] (interpret (third function) (add-var env (first (second function)) (interpret (second expn) env))))))

        ;; the interpreter itself
        (def interpret (fn [expn env]
          (cond do-trace (println "Interpret is processing: " expn))
          (cond
            ; basic values
            (is-number? expn) (interpret-number expn env)
            (is-symbol? expn) (interpret-symbol expn env)
            (is-boolean? expn) (interpret-boolean expn env)
            (is-function? expn) (interpret-function expn env)
            ; built-in functions
            (is-plus? expn) (interpret-plus expn env)
            (is-minus? expn) (interpret-minus expn env)
            (is-times? expn) (interpret-times expn env)
            (is-divides? expn) (interpret-divides expn env)
            (is-equals? expn) (interpret-equals expn env)
            (is-greater-than? expn) (interpret-greater-than expn env)
            (is-not? expn) (interpret-not expn env)
            (is-or? expn) (interpret-or expn env)
            (is-and? expn) (interpret-and expn env)
            (is-print? expn) (interpret-print expn env)
            ; special syntax
            (is-with? expn) (interpret-with expn env)
            (is-if? expn) (interpret-if expn env)
            ; functions
            (is-function-application? expn env) (interpret-function-application expn env)
            :else 'error)))

        ;; tests of using environment
        (println "Environment tests:")
        (println (add-var (make-env) 'x 1))
        (println (add-var (add-var (add-var (make-env) 'x 1) 'y 2) 'x 3))
        (println (lookup-var '() 'x))
        (println (lookup-var '((x 1)) 'x))
        (println (lookup-var '((x 1) (y 2)) 'x))
        (println (lookup-var '((x 1) (y 2)) 'y))
        (println (lookup-var '((x 3) (y 2) (x 1)) 'x))

        ;; examples of using interpreter
        (println "Interpreter examples:")
        (run '1)
        (run '2)
        (run '(+ 1 2))
        (run '(/ (* (+ 4 5) (- 2 4)) 2))
        (run '(with (x 1) x))
        (run '(with (x 1) (with (y 2) (+ x y))))
        (run '(with (x (+ 2 4)) x))
        (run 'false)
        (run '(not false))
        (run '(with (x true) (with (y false) (or x y))))
        (run '(or (= 3 4) (> 4 3)))
        (run '(with (x 1) (if (= x 1) 2 3)))
        (run '(with (x 2) (if (= x 1) 2 3)))
        (run '((lambda (n) (* 2 n)) 4))
        (run '(with (double (lambda (n) (* 2 n))) (double 4)))
        (run '(with (sum-to (lambda (n) (if (= n 0) 0 (+ n (sum-to (- n 1)))))) (sum-to 100)))
        (run '(with (x 1) (with (f (lambda (n) (+ n x))) (with (x 2) (println (f 3))))))
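
    As a starting point only, here is a hedged sketch of one possible shape for the missing pieces, in the same style as the interpreter above. The names is-for?, run-update and interpret-for are invented here, not part of the original code; the sketch assumes the loop-update is built only from (set var expr) and (seq a b) forms (since the interpreter has no mutable variables, "set" is handled locally by extending the environment), and a dispatch clause such as (is-for? expn) (interpret-for expn env) would still have to be added to the cond inside interpret.

        ;; hedged sketch: recognise (for decls end-test loop-update do statement)
        (def is-for?
          (fn [expn]
            (and (list? expn) (= 6 (count expn)) (= 'for (first expn)) (= 'do (nth expn 4)))))

        ;; apply a loop-update such as (set i (+ 1 i)) or (seq u1 u2) to the environment
        (def run-update
          (fn [upd env]
            (cond
              (= 'set (first upd)) (add-var env (second upd) (interpret (third upd) env))
              (= 'seq (first upd)) (run-update (third upd) (run-update (second upd) env))
              :else env)))

        (def interpret-for
          (fn [expn env]
            (let [decls (second expn)
                  end-test (third expn)
                  loop-update (fourth expn)
                  statement (nth expn 5)]
              (loop [e (reduce (fn [acc d] (add-var acc (first d) (interpret (second d) acc))) env decls)]
                (if (interpret end-test e)
                  (do
                    (interpret statement e)
                    (recur (run-update loop-update e)))
                  'done)))))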

    Read the article

  • Facebook like button not going behind the fixed div

    - by Lahiru Chathuranga
    I added a Facebook like button to my website. My website has a fixed div on top of the page (the blue div in the image). The like button is below that, in a div which can scroll. My problem is that when the page is scrolled down, the like button comes on top of the fixed (blue) div. I want it to scroll behind that div. How can I do that? There are a couple of screenshots attached ("Before Scroll" and "After Scroll"). Here is my code for the fixed div:

        <script type="text/javascript">
        function got_to_signup(){ window.location.href = "view/policy"; }
        </script>
        <div id="fb-root"></div>
        <script>(function(d, s, id) {
          var js, fjs = d.getElementsByTagName(s)[0];
          if (d.getElementById(id)) return;
          js = d.createElement(s); js.id = id;
          js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1&appId=368003049941951";
          fjs.parentNode.insertBefore(js, fjs);
        }(document, 'script', 'facebook-jssdk'));</script>
        <div style="width:100%;background-color:#0094d6;" >
        <div id="dd" style="background-color:#0094d6; width:100%; height:75px;position:fixed; " class="center ">
        <div id="a" style="width:1010px; height:75px; background-color:#000000;background:url(xx.png); background-repeat:no-repeat; font-family:Arial, Helvetica, sans-serif; font-size:11px; color:#003; " class="inner div_border">
        <table width="1010" border="0" >
          <tr >
            <td width="15%" rowspan="2"><a href="" style="cursor:pointer; cursor:hand;"><div style="width:200px; height:50px;background-color:none;"></div></a></td>
            <td width="22%" height="14">&nbsp;</td>
            <td width="5%">&nbsp;</td>
            <td width="5%">&nbsp;</td>
            <td width="28%">&nbsp;</td>
            <td width="2%">&nbsp;</td>
            <td width="23%">&nbsp;</td>
          </tr>
          <tr>
            <td colspan="4"> </td>
            <td colspan="2"><span style="float: right; " ><div style="background-color:#006d9e;border-radius:3px; width:250px; height:34px; display: table; vertical-align: middle; color:#FFF; ">
              <table width="100%" border="0" >
                <tr >
                  <td width="43%" style="text-align:center"> Start to bump !</td>
                  <td width="29%"><div id='basic-modal'><span style="float: right; " ><input name="login_btn" type="button" class="login_button basic" id="login_btn" value="Sign in" /></span></div></td>
                  <td width="28%"><span style="float: right; " ><form id="form_reg" method="post"><input name="register_btn" type="button" class="register_button" id="register_btn" value="Sign up" onclick="got_to_signup()"/></form></span></td>
                </tr>
              </table>
            </div></span></td>
          </tr>
          <tr>
            <td>&nbsp;</td>
            <td>&nbsp;</td>
            <td>&nbsp;</td>
            <td>&nbsp;</td>
            <td>&nbsp;</td>
            <td>&nbsp;</td>
            <td style="color:#FFF; font:Arial, Helvetica, sans-serif; font-size:9px; text-align:right;"> Beta Version </td>
          </tr>
        </table>
        </div></div></div>

    And here is my Facebook like button code:

        </script>
        <div id="fb-root"></div>
        <script>(function(d, s, id) {
          var js, fjs = d.getElementsByTagName(s)[0];
          if (d.getElementById(id)) return;
          js = d.createElement(s); js.id = id;
          js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1&appId=368003049941951";
          fjs.parentNode.insertBefore(js, fjs);
        }(document, 'script', 'facebook-jssdk'));</script>
        <td height="21" colspan="2">
          <table width="187" style="margin-left:3px;font-size:1px;background-image:url(share_back.png);background-repeat:no-repeat;border-radius:3px;" >
            <!--tweeter button-->
            <tr><td width="71"><a href="https://twitter.com/bump_lk" class="twitter-follow-button" data-show-count="false" style="float:right;">Follow @bump_lk</a>
              <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script></td>
            <!--facebook like button-->
            <td width="48"><div class="fb-like" data-href="https://www.facebook.com/Bump.lk" data-send="false" data-layout="button_count" data-width="10" data-show-faces="false" style="position:relative;"></div>
            </td></tr>
          </table></td>
        <td>&nbsp;</td>
        <td>&nbsp;</td>
        <td >
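
    This is usually a stacking-order issue rather than a scrolling one: the like button is rendered in an iframe, and the fixed #dd header has no explicit z-index, so the iframe can paint on top of it. A hedged sketch of the CSS fix (selectors taken from the markup above, the numeric values are just illustrative; both elements already have a position set inline, which z-index requires):

        /* keep the fixed header above normally positioned page content */
        #dd { z-index: 1000; }

        /* keep the like button's container in a lower stacking layer */
        .fb-like { z-index: 1; }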

    Read the article

  • pfsense peer-to-peer OpenVPN not connecting

    - by John P
    I'm trying to setup a peer-to-peer OpenVPN between two pfsense servers running 2.0.1-RELEASE, but the client keeps getting the connection dropped, with a status of "reconnecting; ping-restart" and nothing appears to be routing between them. Both these firewalls are also doing PPTP VPNs that are working correctly.

        FW01 ("server")
        LAN: 10.1.1.2/24
        WAN: xx.xx.126.34/27
        ServerMode: Peer to Peer (Shared Key)
        Protocol: UDP
        DeviceMode: tun
        Interface: WAN
        Port: 1194
        Tunnel: 10.0.8.1/30
        Local Network: 10.1.1.0/24
        Remote Network: 192.168.1.0/24
        Firewall Rule in OpenVPN tab: UDP * * * * * none

        FW03 (client)
        LAN: 192.168.1.2/24
        WAN: xx.xx.9.66/27
        ServerMode: Peer to Peer (Shared Key)
        Protocol: UDP
        DeviceMode: tun
        Interface: WAN
        Server Host: xx.xx.126.34
        Tunnel: -- also tried 10.1.8.0/24
        Remote Network: 10.1.1.0/24

    Client logs:

    System Log:

        Apr 6 18:00:08 kernel: ... Restarting packages.
        Apr 6 18:00:13 check_reload_status: Starting packages
        Apr 6 18:00:19 php: : Restarting/Starting all packages.
        Apr 6 18:00:56 kernel: ovpnc1: link state changed to DOWN
        Apr 6 18:00:56 check_reload_status: Reloading filter
        Apr 6 18:00:57 check_reload_status: Reloading filter
        Apr 6 18:00:57 kernel: ovpnc1: link state changed to UP
        Apr 6 18:00:57 check_reload_status: rc.newwanip starting ovpnc1
        Apr 6 18:00:57 check_reload_status: Syncing firewall
        Apr 6 18:01:02 php: : rc.newwanip: Informational is starting ovpnc1.
        Apr 6 18:01:02 php: : rc.newwanip: on (IP address: ) (interface: ) (real interface: ovpnc1).
        Apr 6 18:01:02 php: : rc.newwanip: Failed to update IP, restarting...
        Apr 6 18:01:02 php: : send_event: sent interface reconfigure got ERROR: incomplete command. all reload reconfigure restart newip linkup sync

    Client OpenVPN log:

        Apr 6 18:39:14 openvpn[12177]: Inactivity timeout (--ping-restart), restarting
        Apr 6 18:39:14 openvpn[12177]: SIGUSR1[soft,ping-restart] received, process restarting
        Apr 6 18:39:16 openvpn[12177]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 18:39:16 openvpn[12177]: Re-using pre-shared static key
        Apr 6 18:39:16 openvpn[12177]: Preserving previous TUN/TAP instance: ovpnc1
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link local (bound): [AF_INET]64.94.9.66
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link remote: [AF_INET]64.74.126.34:1194

    Server OpenVPN log:

        Apr 6 14:40:36 openvpn[22117]: UDPv4 link remote: [undef]
        Apr 6 14:40:36 openvpn[22117]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194
        Apr 6 14:40:36 openvpn[21006]: /usr/local/sbin/ovpn-linkup ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[21006]: /sbin/ifconfig ovpns1 10.1.8.1 10.1.8.2 mtu 1500 netmask 255.255.255.255 up
        Apr 6 14:40:36 openvpn[21006]: do_ifconfig, tt-ipv6=0, tt-did_ifconfig_ipv6_setup=0
        Apr 6 14:40:36 openvpn[21006]: TUN/TAP device /dev/tun1 opened
        Apr 6 14:40:36 openvpn[21006]: Control Channel Authentication: using '/var/etc/openvpn/server1.tls-auth' as a OpenVPN static key file
        Apr 6 14:40:36 openvpn[21006]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 14:40:36 openvpn[21006]: OpenVPN 2.2.0 amd64-portbld-freebsd8.1 [SSL] [LZO2] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Aug 11 2011
        Apr 6 14:40:36 openvpn[17171]: SIGTERM[hard,] received, process exiting
        Apr 6 14:40:36 openvpn[17171]: /usr/local/sbin/ovpn-linkdown ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[17171]: ERROR: FreeBSD route delete command failed: external program exited with error status: 1
        Apr 6 14:40:36 openvpn[17171]: event_wait : Interrupted system call (code=4)
        Apr 6 14:06:32 openvpn[17171]: Initialization Sequence Completed
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link remote: [undef]
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194

    Read the article

  • Update php 5.2.0 to 5.2.4 with aptitude

    - by Kiva
    Hi guys, I would like to update PHP 5 on my server. At the moment I use PHP 5.2.0, and I want to update it to PHP 5.2.4 (not PHP 5.3). I tried to do this:

        aptitude update
        aptitude upgrade

    63 packages were updated, but not PHP, which is still at 5.2.0. How can I update my PHP, please? Here is the output of the commands asked for by David in another post:

        aptitude search php5
        p   libapache-mod-php5     - server-side, HTML-embedded scripting langu
        i A libapache2-mod-php5    - server-side, HTML-embedded scripting langu
        i   php5                   - server-side, HTML-embedded scripting langu
        p   php5-apache2-mod-bt    - PHP bindings for mod_bt
        p   php5-auth-pam          - A PHP5 extension for PAM authentication
        i   php5-cgi               - server-side, HTML-embedded scripting langu
        p   php5-clamavlib         - PHP ClamAV Lib - ClamAV Interface for PHP5
        p   php5-cli               - command-line interpreter for the php5 scri
        i A php5-common            - Common files for packages built from the p
        i   php5-curl              - CURL module for php5
        p   php5-dev               - Files for PHP5 module development
        i A php5-gd                - GD module for php5
        p   php5-idn               - PHP api for the IDNA library
        p   php5-imagick           - ImageMagick module for php5
        p   php5-imap              - IMAP module for php5
        p   php5-interbase         - interbase/firebird module for php5
        p   php5-json              - JSON serialiser for PHP5
        p   php5-ldap              - LDAP module for php5
        p   php5-mapscript         - module for php5-cgi to use mapserver
        p   php5-maxdb             - PHP extension to access MaxDB databases fo
        i A php5-mcrypt            - MCrypt module for php5
        p   php5-memcache          - memcache extension module for PHP5
        p   php5-mhash             - MHASH module for php5
        p   php5-ming              - Ming module for php5
        i A php5-mysql             - MySQL module for php5
        p   php5-odbc              - ODBC module for php5
        p   php5-pgsql             - PostgreSQL module for php5
        p   php5-ps                - ps module for PHP 5
        p   php5-pspell            - pspell module for php5
        p   php5-radius            - PECL radius module for PHP 5
        p   php5-recode            - recode module for php5
        p   php5-snmp              - SNMP module for php5
        p   php5-sqlite            - SQLite module for php5
        p   php5-sqlite3           - SQLite3 module for php5
        p   php5-sqlrelay          - SQL Relay PHP API
        p   php5-suhosin           - advanced protection module for php5
        p   php5-sybase            - Sybase / MS SQL Server module for php5
        p   php5-tidy              - tidy module for php5
        p   php5-uuid              - OSSP uuid module for php5
        p   php5-xapian            - Xapian search engine interface for PHP5
        p   php5-xcache            - Fast, stable PHP opcode cacher
        p   php5-xmlrpc            - XML-RPC module for php5
        p   php5-xsl               - XSL module for php5

        aptitude show php5 | grep Version
        Version : 5.2.0-8+etch13

        aptitude show php5-cgi | grep Version
        Version : 5.2.0-8+etch13

        php5 --version
        -bash: php5: command not found

        php-cgi --version
        PHP 5.2.0-8+etch13 (cgi-fcgi) (built: Oct 2 2008 08:21:17)
        Copyright (c) 1997-2006 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2006 Zend Technologies

    Read the article

  • `svn checkout` on the SVN server causes the repo to break with a 301 error

    - by Phillip Oldham
    We have an nginx server which proxies to a standard set-up of Apache+SVN. The nginx set-up is a very simple proxy:

        server {
            server_name svn.ourdomain.tld;
            location / {
                proxy_pass http://localhost:8080;
            }
        }

    Apache is set up as follows:

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthType Basic
            AuthName "Authentication Required"
            AuthUserFile /var/svn/.auth
            Require valid-user
        </Location>

    ...which allows us to access repositories using something like http://svn.ourdomain.tld/repo. We've been running this set-up now for about 2 years without issue. Recently we've found that we need to check out one of the repositories onto the server itself; however, whenever we do so it seems to break the repo. From that point on, it will only respond with a 301 Moved Permanently error. We've tried:

        svn co file:///path/to/repo
        svn co svn://localhost/repo
        svn co svn://svn.ourdomain.tld/repo
        svn co svn+ssh://localhost/repo
        svn co svn+ssh://svn.ourdomain.tld/repo
        svn co http://localhost/repo
        svn co http://svn.ourdomain.tld/repo

    We also tried bypassing nginx, and got the same error:

        svn co http://localhost:8080/repo
        svn co http://svn.ourdomain.tld:8080/repo

    Checking out from a different machine works as expected until we attempt to check out on the server; after that it refuses with the same 301 error. What is more confusing is that this repository server also hosts our Hudson CI server, which pulls and builds our projects hourly. This leads us to suspect that it's the svn client which is causing an error in communication. It is also very confusing that removing and then re-creating the repo using svnadmin doesn't reset the error - the repo is still unavailable even though it's "new"! Restarting Apache and Subversion (svnserve) has no effect on this, or on the original error. Version information:

        OS: 64-bit CentOS 4.2, 2.6.27 kernel
        svn client: 1.4.2 (same for both server and remote clients)
        svn server: 1.4.2
        httpd: 2.2.3

    UPDATE: This also happens with svn export when run on the repo server. Run from any other box/client, there isn't a problem. Here's the workflow, to help clarify the error:

        [~repo-server~]# svnadmin create {repo}; chown -Rf www:www {repo}
        [remote-client]# svn checkout http://svn.ourdomain.tld/repo
        [remote-client]# svn add file; svn ci -m ''
        [~repo-server~]# cd /var/www; svn export file:///path/to/repo/trunk ourproject
        [remote-client]# svn update    <- fails with the 301 error

    I can also confirm that the hostname of the box doesn't have an effect here, which is very odd: whether or not svn.ourdomain.tld is added to /etc/hosts, it still breaks - we thought it could be an issue with localhost routing, but that doesn't seem to be the case. Are we missing something in the documentation which states you can't check out a repo when the server is on the same box? How can we stop the repos becoming corrupt when we check out locally?

    Read the article

  • How do I get a .Net 4.0 app to coexist as an application under a SharePoint 2007 website in IIS v7?

    - by Craig Nakamoto
    I have created a small .Net 4.0 website and installed it on my SharePoint server as a separate web site in IIS v7 (using port 8008 for now). I had to install the .Net 4 framework, set up the database, etc., and this all went smoothly; my app works as a standalone website. Now I am trying to get pages from my website to show up in SharePoint 2007. For various reasons (the SharePoint site is using SSL, security, etc.) I now need to move my .Net app to run under the SharePoint 2007 site in IIS. I have added it as an 'Application' and set it up with the same .Net v4 application pool and settings that were working when it was set up as a standalone site. Now when I try to access the application I get the error at the end of this description. Any help would be greatly appreciated. I already tried following the instructions on this post: http://blogs.msdn.com/sgoodyear/archive/2007/05/07/custom-web-applications-coexisting-with-sharepoint-2007.aspx but that did not help. Thanks! Craig

    p.s. Here are the error details:

        Log Name: Application
        Source: ASP.NET 4.0.30319.0
        Date: 11/05/2010 11:49:31 AM
        Event ID: 1310
        Task Category: Web Event
        Level: Warning
        Keywords: Classic
        User: N/A
        Computer: GGI-SP1.ggi.ca
        Description:
        Event code: 3008
        Event message: A configuration error has occurred.
        Event time: 11/05/2010 11:49:31 AM
        Event time (UTC): 11/05/2010 3:49:31 PM
        Event ID: 559d7ac619344f3499a4a31c6c9e58cd
        Event sequence: 1
        Event occurrence: 1
        Event detail code: 0

        Application information:
        Application domain: /LM/W3SVC/1653978112/ROOT/bidmonitor-1-129180665715766107
        Trust level: Application
        Virtual Path: /bidmonitor
        Application Path: C:\inetpub\wwwroot\bidmonitor\
        Machine name: GGI-SP1

        Process information:
        Process ID: 5272
        Process name: w3wp.exe
        Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
        Exception type: ConfigurationErrorsException
        Exception message: Could not find permission set named 'ASP.Net'.
        at System.Web.Hosting.ApplicationManager.CreateAppDomainWithHostingEnvironment(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters)
        at System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags, PolicyLevel policyLevel, Exception appDomainCreationException)

        Request information:
        Request URL: gginet.ggi.ca/bidmonitor
        Request path: /bidmonitor
        User host address: 10.10.1.33
        User:
        Is authenticated: False
        Authentication Type:
        Thread account name: NT AUTHORITY\NETWORK SERVICE

        Thread information:
        Thread ID: 3
        Thread account name: NT AUTHORITY\NETWORK SERVICE
        Is impersonating: False
        Stack trace: at System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags, PolicyLevel policyLevel, Exception appDomainCreationException)
    Read the article

  • connect() failed (111: Connection refused) while connecting to upstream

    - by Burning the Codeigniter
    I'm experiencing 502 gateway errors when accessing a PHP file in a directory (http://domain.com/dev/index.php); the log simply says this:

        2011/09/30 23:47:54 [error] 31160#0: *35 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /dev/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com"

    I've never experienced this before; how do I fix this type of 502 gateway error? This is the nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

        #mail {
        #   # See sample authentication script at:
        #   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
        #
        #   # auth_http localhost/auth.php;
        #   # pop3_capabilities "TOP" "USER";
        #   # imap_capabilities "IMAP4rev1" "UIDPLUS";
        #
        #   server {
        #       listen localhost:110;
        #       protocol pop3;
        #       proxy on;
        #   }
        #
        #   server {
        #       listen localhost:143;
        #       protocol imap;
        #       proxy on;
        #   }
        #}
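
    The log line points at fastcgi://127.0.0.1:9000, so the refusal is almost certainly between nginx and whatever should be answering on that port, not inside nginx itself. A quick sanity check (the service name below is an assumption - adjust for the distribution and PHP handler in use) might look like:

        netstat -tlnp | grep ':9000'      # is anything actually listening on 127.0.0.1:9000?
        service php5-fpm status           # or php-fpm / php-cgi, depending on how PHP is run
        # if PHP-FPM is configured to listen on a unix socket rather than a TCP port,
        # the vhost's fastcgi_pass has to point at that socket instead,
        # e.g. fastcgi_pass unix:/var/run/php5-fpm.sock;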

    Read the article

  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder that is symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen instance as mentioned above, with the .ini file located in my app folder.

    My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;

            location / {
                try_files $uri @app;
            }

            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group
        #application's base folder
        base = /home/metheuser/webapps/ers_portal
        #python module to import
        app = run_web
        module = %(app)
        home = %(base)/ers_portal_venv
        pythonpath = %(base)
        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock
        #permissions for the socket file
        chmod-socket = 666
        #uwsgi variable only, does not relate to your flask application
        callable = app
        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        @a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names, data = data)  # dictionary of data

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6.

    My uwsgi log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the views "my_table" route prints into the log file, but not once it stops working. Any ideas?
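
    Nothing in the post pins down the cause, but a single-process uWSGI instance that stops answering after a few hours is often just a stuck or crashed worker. A hedged addition to ers_portal_uwsgi.ini that makes the app more resilient and easier to diagnose (the option names are standard uWSGI, the values are guesses) would be:

        #run a master with several workers instead of a single process
        master = true
        processes = 4
        #recycle any worker stuck on one request for more than 30 seconds
        harakiri = 30
        #restart workers periodically so a slow leak cannot accumulate for hours
        max-requests = 1000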

    Read the article

  • mount_afp on linux, user rights

    - by Antonio Sesto
    I need to mount a remote filesystem on a Linux box using the AFP protocol. The Linux box runs an old Debian 4. I downloaded the source code of mount_afp, compiled it and installed it with all the required packages. Then I created /dev/fuse with the following command (according to the instructions here):

        mknod /dev/fuse c 10 229

    I can mount the remote filesystem as root by executing:

        mount_afp afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/

    but the same command fails when run as a normal user (of the local machine). After reading here and there, I created a group fuse, and added my normal user U to the group fuse:

        [prompt] groups U
        U fuse

    Then I modified the group of /dev/fuse, which now has the following rights:

        0 crwxrwx--- 1 root fuse 10, 229 Feb 8 15:33 /dev/fuse

    However, if the user U tries to mount the remote filesystem using the same command as above, U gets the following error:

        Incorrect permissions on /dev/fuse, mode of device is 20770, uid/gid is 0/1007. But your effective uid/gid is 1004/1004

    But the user U with uid 1004 also has gid 1007 (group fuse). I suspect the problem is related to real/effective/etc. IDs, but I do not know how to proceed and could not find any clear instructions. Could you please help me?

    There is also another problem. If I mount /mnt/MOUNTPOINT as root and run ls -l /mnt, I get:

        drwxrwxrwx 15 root root 466 Feb 8 16:34 MONTPOINT

    If I run ls -l /mnt as normal user U I get:

        ? ?????????? ? ? ? ? ? MOUNTPOINT

    and in fact when I try to cd /mnt/MOUNTPOINT I get:

        $-> cd /mnt/MOUNTPOINT
        -sh: cd: /mnt/MOUNTPOINT: Not a directory

    Then I unmount /mnt/MOUNTPOINT as root, run ls -l /mnt again as normal user U, and get:

        0 drwxr-xr-x 2 root root 6 Feb 8 15:32 MOUNTPOINT/

    After reading Frank's answer, I killed every shell/process running with the privileges of user U. U still cannot mount the remote filesystem, but the error message has changed. Now it is: "Login error: Authentication failed". The problem is not related to the remote login/password, since the same command works perfectly when run as root on the local machine.

    Since I cannot get mount_afp to work with normal users, I decided to follow mgorven's suggestion. So I ran the commands:

        mount_afp -o allow_other afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/
        mount_afp -o user=U afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/

    The mount succeeds but user U cannot access the mount point. If U executes ls -l in /mnt:

        U@LOCAL_HOST [/mnt] $-> ls -l
        ls: cannot access MOUNT_POINT: Permission denied
        total 0
        ? ?????????? ? ? ? ? ? MOUNT_POINT

    Is it so hard to get this utility working?
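
    The error message compares the device's group (1007) against the effective gid of the calling process (1004), so one reading (an assumption, not something the post verifies) is that the new fuse group membership is simply not active yet in the shell that runs the mount. That is cheap to test before digging further:

        newgrp fuse        # start a shell whose effective gid is the fuse group
        id                 # should now show gid=1007(fuse) as the effective group
        mount_afp afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/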

    Read the article

  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just set up Active Directory in my company and imported all Linux users and groups. On the Linux client (configured to query LDAP in nsswitch.conf): if I do a plain ldapsearch against the AD LDAP server I get the complete set of about 2580 users. But the following only returns part of the users, 1221 in number:

        getent passwd | wc -l

    Running it with strace shows what looks like an attempt to reconnect. My ideas were: does the Linux lookup run its LDAP query with a parameter incompatible with AD? Or perhaps it is an encoding issue - the Windows users are entered in AD with all kinds of characters. Maybe someone could shed light on this and give a hint on how to debug it further.

    Here's our ldap.conf:

        host audc01.mycompany.de audc03.mycompany.de
        base ou=location,dc=mycompany,dc=de
        ldap_version 3
        binddn cn=manager,ou=location,dc=mycompany,dc=de
        bindpw Password
        timelimit 120
        idle_timelimit 3600
        nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub
        nss_base_group ou=location,dc=mycompany,dc=de?sub
        # RFC 2307 (AD) mappings
        nss_map_objectclass posixAccount User
        # nss_map_objectclass shadowAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute cn sAMAccountName
        # Display Name
        nss_map_attribute gecos cn
        ## nss_map_attribute homeDirectory unixHomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell
        # PAM attributes
        pam_login_attribute sAMAccountName
        # Location based login
        pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de
        pam_member_attribute msSFU30PosixMember
        ## pam_lookup_policy yes
        pam_filter objectclass=User
        nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data

    and here is the strace output from getent passwd:

        poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}])
        read(4, "0\204\0\0\0A\2\1", 8) = 8
        read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63
        stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0
        geteuid32() = 12560
        getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0
        getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0
        time(NULL) = 1297684722
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        munmap(0xb7617000, 1721) = 0
        close(3) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        write(4, "0\5\2\1\5B\0", 7) = 7
        shutdown(4, 2 /* send and receive */) = 0
        close(4) = 0
        shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor)
        close(-1) = -1 EBADF (Bad file descriptor)
        exit_group(0) = ?
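
    One guess worth testing: Active Directory caps un-paged LDAP searches at its MaxPageSize limit (1000 by default), and while ldapsearch pages through results automatically, nss_ldap only does so when asked. The corresponding ldap.conf options (standard PADL nss_ldap/pam_ldap directives; whether this particular build honours them is an assumption) are:

        nss_paged_results yes
        pagesize 1000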

    Read the article

  • I can't send email from my server to gmail addresses

    - by brianegge
    I'm using postfix, and have set up SPF, DKIM, and DomainKeys. I can get my email to go to Yahoo, but not Gmail. Here are the headers from an email sent to Yahoo. Yahoo reports the email as DomainKeys verified.

        X-Apparently-To: brianegge at yahoo.com via 68.142.206.167; Sat, 20 Mar 2010 05:29:19 -0700
        Return-Path: <domains at theeggeadventure.com>
        X-YahooFilteredBulk: 67.207.137.114
        X-YMailISG: x7_Rl9EWLDuugoqPcORhih0FeQMOaIIpz4qfuu9ttx1xbo3uKI2kz.CLUy2cJ1BTtHAwuJtrsGRsveHIx.Dx95avNGlPPGWy_cSpnEwWLXGxBciO.YgtSQxdURQiWLCLvbHej0QPjQIHFjAFjdeGhJd2Y8NgTW1wcExq45Sb7LMlOGvtGMjSQuc8QazwXUxpZrQbIxgSQUTmzQO1x30xaZ2Us6TQTab7Wpya6OgAX.emKOM3phfS5kfhYj9FLQ.qi32sFNWnAoFdVK596OTP2F63PAJOVM5qPsM2jIAbJylIBmnj94LO7hOVr3KOS6XLtCPRn2Oe
        X-Originating-IP: [67.207.137.114]
        Authentication-Results: mta1055.mail.mud.yahoo.com from=theeggeadventure.com; domainkeys=pass (ok); from=theeggeadventure.com; dkim=pass (ok)
        Received: from 127.0.0.1 (EHLO mail.theeggeadventure.com) (67.207.137.114) by mta1055.mail.mud.yahoo.com with SMTP; Sat, 20 Mar 2010 05:29:19 -0700
        Received: by mail.theeggeadventure.com (Postfix, from userid 1003) id BB5B01C16A4; Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        DomainKey-Signature: a=rsa-sha1; s=2010; d=theeggeadventure.com; c=simple; q=dns; b=JHbK9VhqyQTfpQFqaXxJrKpEG9h9H0IZ0LdWoBooJEA7hv3SYWmFUtyE247EuwoaG gzApKJ1DuRhwESZ7PswrbzuaUL8poAUO8LmMvZ+OqnDolgNSJUYWu0FcO+fe3H4m9ZD grkj0xMpHw+uFjXV4plKO+sa8olJXJAmP+9cMEo=
        X-DKIM: Sendmail DKIM Filter v2.8.2 mail.theeggeadventure.com BB5B01C16A4
        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=theeggeadventure.com; s=2010; t=1269088156; bh=bUlMldcnzFCmCmNT8qjpRl6fiY1YyjiZiC9jhCXASOw=; h=Subject:To:Message-Id:Date:From; b=EVNolTlh4Gch5/HIrrHaRQvcApl7wkB42gB44NsPcLZD2QrhuOvnhanhnEB4UbV0e A+3dAOjhX7LKzgGrn11jXNTiEjNX1vQDsX3HyG0fNra73aWiGTzr1nHJfnuEJ7Ph0j 5tp0HRL5jjikD1XJcvmsYzTpT22mxuz60HXYRB1s=
        Subject: cron
        To: <brianegge at yahoo.com>
        X-Mailer: mail (GNU Mailutils 1.2)
        Message-Id: <[email protected]>
        Date: Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        From: This sender is DomainKeys verified [email protected] (domains) View contact details
        Content-Length: 818

    When I send to Gmail, I see the following in my server log, but the message doesn't even reach my spam folder.

        Mar 20 12:59:12 Everest postfix/pickup[27802]: C81C61C16A4: uid=1000 from=<egge>
        Mar 20 12:59:12 Everest postfix/cleanup[27847]: C81C61C16A4: message-id=<[email protected]>
        Mar 20 12:59:13 Everest postfix/qmgr[27801]: C81C61C16A4: from=<[email protected]>, size=2784, nrcpt=1 (queue active)
        Mar 20 12:59:14 Everest postfix/smtp[27849]: C81C61C16A4: to=<brianegge at gmail.com>, relay=gmail-smtp-in.l.google.com[209.85.223.24]:25, delay=2.1, delays=0.39/0.28/0.13/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1269089954 32si4566750iwn.51)
        Mar 20 12:59:14 Everest postfix/qmgr[27801]: C81C61C16A4: removed

    I've sent email to test services, and they report everything verifies OK. I've also checked all the RBL lists, and I'm not on any of them.
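
    Given the status=sent (250 2.0.0 OK) line above, Gmail is accepting the message and then discarding or filtering it, so the problem looks like sender reputation or identity rather than an outright rejection. Two checks that commonly matter (the IP and hostname below are taken from the headers above and may not be the live values):

        dig -x 67.207.137.114 +short     # does the PTR record exist...
        hostname --fqdn                  # ...and match what Postfix announces in HELO?
        # an external reflector such as check-auth@verifier.port25.com reports back
        # how SPF, DKIM and reverse DNS look from the outside
        echo test | mail -s "deliverability check" check-auth@verifier.port25.com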

    Read the article

< Previous Page | 493 494 495 496 497 498 499 500 501 502 503 504  | Next Page >