Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.


  • Boost and XML (C++)

    - by Nuno
    Hi, is there a good (and simple) way to read and write XML files using Boost? I can't seem to find any simple sample that reads XML files with Boost. Can you point me to a simple sample that uses Boost for reading and writing XML files? If not Boost, is there any good and simple C++ library for reading and writing XML that you can recommend? Thanks, Nuno
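
    A minimal sketch of the usual answer here, Boost.PropertyTree, which ships an XML parser; the file name and key paths below are made up for illustration:

        #include <boost/property_tree/ptree.hpp>
        #include <boost/property_tree/xml_parser.hpp>
        #include <iostream>

        int main() {
            boost::property_tree::ptree pt;

            // parse an XML document into a tree of key/value nodes
            boost::property_tree::read_xml("config.xml", pt);

            // read a value by path, with a default if the node is missing
            std::string name = pt.get<std::string>("settings.name", "unnamed");
            std::cout << name << "\n";

            // modify the tree and write it back out as XML
            pt.put("settings.name", "updated");
            boost::property_tree::write_xml("config.xml", pt);
            return 0;
        }

    Note that PropertyTree is a tree-of-strings abstraction rather than a full XML API; for namespaces or mixed content, a dedicated library such as TinyXML or libxml2 is the usual recommendation.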

    Read the article

  • Problem with downloading files on Zend Framework

    - by user1400
    Hello all, I am uploading files to the server in my application and that works fine. I want other users to be able to download these files, but I get an error. I created an upload folder in the public folder and I upload my files into it. Now when I create a link (<a href="http://mytest/public/1.jpg">download image</a>) to these files, I get the error "The requested URL /public/upload/1.jpg was not found on this server." How should I set up routing so these files can be downloaded? Thanks.
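
    The 404 suggests the vhost's document root is the project root rather than public/; in a standard ZF1 deployment the document root points at public/, so /public never appears in URLs. A sketch of the stock public/.htaccess from the ZF1 quickstart, which serves real files directly and routes everything else to the front controller:

        RewriteEngine On
        # serve existing files, links and directories as-is (e.g. /upload/1.jpg)...
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        # ...and send everything else to index.php
        RewriteRule ^.*$ index.php [NC,L]

    With the document root set to public/, the link becomes <a href="/upload/1.jpg"> and no routing is needed for static files at all.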

    Read the article

  • How to resolve merging conflicts in Mercurial (v1.0.2)?

    - by lajos
    I have a merge conflict, using Mercurial 1.0.2:

        merging test.h
        warning: conflicts during merge.
        merging test.h failed!
        6 files updated, 0 files merged, 0 files removed, 1 files unresolved
        There are unresolved merges, you can redo the full merge using:
          hg update -C 19
          hg merge 18

    I can't figure out how to resolve this. Google search results instruct me to use hg resolve, but for some reason my Mercurial (v1.0.2) doesn't have a resolve command: hg: unknown command 'resolve'. How can I resolve this conflict?
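
    hg resolve was only added in Mercurial 1.1, so on 1.0.x the conflict is fixed by hand; a sketch of the workflow, with the revision numbers taken from the message above:

        # redo the merge; the conflicting regions end up in test.h between
        # <<<<<<< / ======= / >>>>>>> markers
        hg update -C 19
        hg merge 18

        # edit test.h by hand, keep the lines you want, delete the markers,
        # then commit the merge
        $EDITOR test.h
        hg commit -m "merged 18 into 19"

    (Upgrading to a Mercurial that has hg resolve makes this less painful: hg resolve --list shows unresolved files and hg resolve -m marks them resolved.)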

    Read the article

  • How to invoke an activity of a library project from an Android app

    - by Austin
    I have some open source Android code that I need to use in my app. It has all the source code as well as resource files, a manifest file and a classpath, and it can be compiled as a separate Android app. I have two non-negotiable constraints on using the open source: (1) I can't change a single line of its code, and (2) I can't ship it as a separate app.

    What I have done is compile the open source as a class library (in Eclipse: Project Properties > Android > tick the "Is Library" checkbox). This generates .class files (in bin) for the Java files and resource files. The open source has an Android activity that I want to open from my application, so I linked the directory of these class files in the source section of my Java build path (in .classpath) and declared the activity in my manifest file with the proper action intent filters. Now when I try to call the activity from my code, it doesn't work, and cleaning and rebuilding doesn't help.

    However, if I build the open source project and my app in the same Eclipse workspace and link the open source into my app in exactly the same manner, it works fine. I am not able to identify the difference; all settings seem to be the same (all files are identical in both cases), yet only the second case works.

    I have also tried it as a jar file: I built the open source as a project library and exported it into a jar (excluding the manifest file). But in that case I am getting the following error:

        UNEXPECTED TOP-LEVEL EXCEPTION:
        java.lang.IllegalArgumentException: already added: ....
        Conversion to Dalvik format failed with error 1

    I guess this is happening because the Android library (2.2) has been included twice in my app (once for building my app and once for building the open source). I don't know how to avoid this; cleaning the project doesn't help.

    What I require is to use the open source and invoke its activities in my app without violating the constraints. If I can use the open source as a bunch of .class files, great; otherwise any other way will do fine. Please look into it and let me know. Thanks.
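
    For reference, a sketch of the two pieces involved when a host app launches an activity from a referenced library project; the package and class names below are hypothetical:

        <!-- host app's AndroidManifest.xml: declare the library's activity
             by its fully qualified class name -->
        <activity android:name="com.example.library.LibraryActivity" />

        // and launch it from host code with an explicit Intent
        Intent intent = new Intent(this, com.example.library.LibraryActivity.class);
        startActivity(intent);

    The "already added" dex error generally means the same classes reach the dex step twice, e.g. once via the library project reference and once via the exported jar, so the library should be on the build path exactly one way.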

    Read the article

  • doxygen with IDL/ODL

    - by John
    If you have a C++ project that has a bunch of .odl files and the .h files generated from them by the ODL compiler, should doxygen be told to parse both .odl and .h, or only one or the other? In general I don't like documenting generated code, but IDL is sort of a special case. In any case, the member listing for ODL files does not seem to be working properly in my tests; are ODL files properly parsed?
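
    For what it's worth, a sketch of the relevant Doxyfile knobs; doxygen's default FILE_PATTERNS already includes *.odl, which it parses with its IDL support, and the generated/ path below is hypothetical:

        # make the choice explicit: document the ODL sources...
        FILE_PATTERNS    = *.odl *.h *.cpp
        # ...but keep the ODL-compiler output out of the docs (hypothetical layout)
        EXCLUDE_PATTERNS = */generated/*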

    Read the article

  • RewriteRule - how to redirect from a folder within a folder to a new domain?

    - by eb_Dev
    Hi, I've been struggling with the following rule:

        RewriteRule ^subdomains/example.com/(.*)$ http://www.example.com/$1 [R=301,L]

    I'm trying to redirect anything that occurs after the folder /subdomains/example.com/ to http://www.example.com/, while including any filename or extra folder path information. E.g.:

        www.olddomain.com/subdomains/example.com/index.html       -> www.example.com/index.html
        www.olddomain.com/subdomains/example.com/files/           -> www.example.com/files/
        www.olddomain.com/subdomains/example.com/files/index.html -> www.example.com/files/index.html

    Any help would be greatly appreciated! Thanks, eb_dev
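
    The rule as written looks close; the usual gotchas are context-dependent. A sketch, assuming the rule lives in the old domain's top-level .htaccess (where the matched path has no leading slash) rather than in httpd.conf (where it does):

        RewriteEngine On
        # .htaccess context: no leading slash in the pattern; escape the dot
        RewriteRule ^subdomains/example\.com/(.*)$ http://www.example.com/$1 [R=301,L]

        # httpd.conf / <VirtualHost> context would need the leading slash:
        # RewriteRule ^/subdomains/example\.com/(.*)$ http://www.example.com/$1 [R=301,L]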

    Read the article

  • How do I force git to use LF under windows?

    - by Sorin Sbarnea
    I want to force git to check out files under Windows using just LF, not CR+LF. I checked the two configuration options but was not able to find the right combination of settings. I want it to convert all files to LF and keep LF in the files. Remark: I used autocrlf = input, but this only repairs the files when you commit them. I want to force git to check them out with LF.
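
    A sketch of one way to get LF-only checkouts on Windows: core.autocrlf = false stops git from rewriting line endings at all, so files come out exactly as stored, which is LF if they were committed normalized:

        # stop checkout-time CRLF conversion; files come out exactly as stored
        git config --global core.autocrlf false

        # or, on newer gits, a .gitattributes in the repo can force LF for all
        # text files:
        #   * text=auto eol=lf

        # re-checkout so existing working-tree files pick up the new endings
        git rm --cached -r .
        git reset --hard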

    Read the article

  • Pre-update SVN script to filter what gets fetched

    - by DrLuk
    Imagine a repository with many kinds of files. I want to get from this repository only some kinds of files, in a "filter process". I mean, ALL files are versioned, but in my local working copy I only want to fetch, e.g., *.php files and skip downloading *.jpg. I'm thinking about a client-side hook script (pre-update). Does anyone know if this is possible? Thanks!
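
    As far as I know Subversion only runs hooks on the server side, so there is no client-side pre-update hook to intercept this; a rough sketch of a wrapper script instead:

        #!/bin/sh
        # wrapper in place of plain `svn update`: fetch everything, then prune
        # the types we never want locally
        svn update "$@"
        find . -name .svn -prune -o -type f -name '*.jpg' -print0 | xargs -0 rm -f

    The caveat: svn status will report the pruned files as missing ('!') and the next update re-fetches them. Real server-side filtering would need a sparse checkout, which Subversion only supports per-directory, not per-extension.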

    Read the article

  • C++ code generation with Python

    - by norapinephrine
    Can anyone point me to some documentation on how to write scripts in Python (or Perl, or any other Linux-friendly scripting language) that generate C++ code from XML or .py files from the command line? I'd like to be able to write some XML files and then run a shell command that reads these files and generates .h files with fully inlined functions, e.g. streaming operators, constructors, etc.
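
    A minimal sketch of the XML-to-header flavor, using only the Python standard library; the XML schema here (a <class> element with <field> children) is invented for illustration:

        #!/usr/bin/env python
        # sketch: read a class description from XML, emit a header with
        # fully inlined accessors to stdout
        import sys
        import xml.etree.ElementTree as ET

        def generate(xml_path):
            # expects e.g. <class name="Point"><field type="int" name="x"/></class>
            root = ET.parse(xml_path).getroot()
            name = root.get("name")
            lines = ["// generated from %s -- do not edit" % xml_path,
                     "class %s {" % name, "public:"]
            for f in root.findall("field"):
                t, n = f.get("type"), f.get("name")
                lines.append("    %s %s() const { return %s_; }" % (t, n, n))
                lines.append("    void set_%s(%s v) { %s_ = v; }" % (n, t, n))
            lines.append("private:")
            for f in root.findall("field"):
                lines.append("    %s %s_;" % (f.get("type"), f.get("name")))
            lines.append("};")
            sys.stdout.write("\n".join(lines) + "\n")

        if __name__ == "__main__":
            generate(sys.argv[1])

    Run as ./gen_header.py point.xml > point.h from a shell or a makefile rule; Cheetah or Jinja-style templates are the usual next step once the output grows beyond a few string formats.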

    Read the article

  • Remove the content of a directory and its subdirectories without destroying the directory structure

    - by user3713876
    In a shell script, I want to clear only the text files and log files in the following structure, without removing the directory or the subdirectories:

        bar/
        |---file1.txt
        |---file2.txt
        |---subdir1/
        |   |---file1.log
        |   |---file2.log
        |---subdir2/
            |---image1.log
            |---image2.log

    I am using rm -rf /bar/*, so the result I get is:

        bar/

    but I want the output to look like:

        bar/
        |---subdir1/
        |---subdir2/

    I want to remove only the text files, log files or csv files, without removing the directory and the subdirectories.
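
    A sketch with find, which descends the tree and deletes only matching regular files, leaving every directory in place:

        # delete only .txt, .log and .csv files under bar/, keeping the tree
        find bar/ -type f \( -name '*.txt' -o -name '*.log' -o -name '*.csv' \) -delete

        # equivalent for finds that lack -delete
        find bar/ -type f \( -name '*.txt' -o -name '*.log' -o -name '*.csv' \) -exec rm -f {} +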

    Read the article

  • Why am I getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting

        Include /private/etc/apache2/extra/httpd-ssl.conf

    in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include:

        DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs"
        ServerName myserver.com:443
        ServerAdmin [email protected]
        ...
        SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem"
        SSLCertificateKeyFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem"

    Then I restart Apache (going to Start > All Programs > Apache Server 2.2 > Control > Restart) and go to localhost on port 443 in Firefox, where I get:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
        <html>
        <head><title>Index of /</title></head>
        <body>
        <h1>Index of /</h1>
        <ul><li><a href="MyPageLinks/"> Links/</a></li>
        .....
        </ul>
        </body></html>

    But on display of the web page I see:

        Unable to connect
        Firefox can't establish a connection to the server at localhost.
        * The site could be temporarily unavailable or too busy. Try again in a few moments.
        * If you are unable to load any pages, check your computer's network connection.
        * If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

    I read "Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?" and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error:

        The Apache2.2 service is running.
        (OS 5)Access is denied. : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem

    I checked the permissions for cert.pem: all of them (Full control, Read, Read and modify, Execute, Write) are granted to Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
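
    A plausible reading of that log line, for what it's worth: the Apache Windows service runs under a service account, not the logged-in Admin, so granting Admin full control is not enough. A sketch of checking and fixing it from an elevated prompt; the LocalService name below is an assumption, use whatever account sc reports:

        rem which account does the service run as? (see SERVICE_START_NAME)
        sc qc Apache2.2

        rem what ACL does the new certificate actually carry?
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem"

        rem grant read access to the service account reported above, e.g.:
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem" /grant "NT AUTHORITY\LocalService:R"
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\key.pem" /grant "NT AUTHORITY\LocalService:R"

    Since the old cert/key pair works in the same place, it is also worth checking whether the new files are EFS-encrypted (shown green in Explorer) or were copied over with restrictive ACLs from another user.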

    Read the article

  • Slow wifi from Windows Server 2003 virtualized in XenServer

    - by John Clayton
    I'm a brand spanking new user of OS X, coming from a lifetime of Windows use. I've been setting up my new MacBook Pro and have run into a very unusual problem: over wifi, I am unable to copy files to or from my Windows Home Server. The problem seems to exist only over wifi, and only to WHS. Here are the details of my setup:

        2010 MacBook Pro (Core i7), OS X 10.6.3
        Windows Home Server PP3 (virtualized in XenServer 5.5)
        Windows 7 Ultimate x64 desktop
        Windows 7 Ultimate x64 in Boot Camp
        D-Link DIR-655 wireless N router

    Here is what I've done to narrow down the problem:

        Files copy fine from WHS to OS X when using gigabit ethernet
        Files copy fine from desktop to OS X when using gigabit ethernet
        Files fail to copy from WHS to OS X when using wifi (error -51)
        Files copy fine from desktop to OS X when using wifi
        Files copy fine from WHS to Boot Camp when using wifi
        Files copy fine from desktop to Boot Camp when using wifi

    From what I can tell, it seems to be some sort of issue between OS X and WHS, but I can't for the life of me see what would be different between shares on WHS and on my desktop. They are both connected using smb://ADDRESS (I've tried both by IP and by name). I can browse the shares on the WHS, but copying to OS X fails.

    I originally found the issue while installing VS2010 off an ISO from WHS, mounted to a Windows 7 VM using VMware Fusion. During the installation the VM was unusable; even the clock got behind the host by about 8 minutes. Once I plugged in the ethernet and disabled the wifi, things picked up and finished quickly. The Fusion 3.1 RC is the only thing I can think of that I installed that may have messed with the wifi driver. I've also tried resetting the wifi router, and have changed it from G & N to N-only. Under Boot Camp I get similar speeds to my wife's N laptop. Any ideas? Thanks!

    Update: The issue has been further narrowed down to Windows Server 2003, which Windows Home Server is based on, running in XenServer with the XenServer tools installed.

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well.

    Pros:

    One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and the file backup.

    It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format). The backups were normal NTFS filesystems; no special tool was needed to read them. It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills). It copied all file attributes, including security.

    Cons:

    It doesn't deal well with junction points, symbolic links, and hard links. It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI; running as a service was necessary to be able to mirror protected OS files. It didn't have open file handling, except for registry hives.

    I guess that the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as presenting much of a problem: you'd really only have a boot failure if you had a mixture of pre- and post-service-pack files, and I can run a full image backup using another tool before applying a service pack.

    Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and can also run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would be fine. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.)

    Read the article

  • APC File Cache not working but user cache is fine

    - by danishgoel
    I have just got a VPS (with cPanel/WHM) to test what gains I could get in my application from using the APC file cache AND user cache. First I got PHP 5.3 compiled as a DSO (Apache module), then installed APC via PECL through SSH. (First I tried the WHM module installer; it had the same problem, so I tried via SSH.) All seemed fine: phpinfo showed APC loaded and enabled, and apc.php looked OK. But as I started testing my PHP application, the File Cache Information stats in APC read:

        Cached Files                 0 (0.0 Bytes)
        Hits                         1
        Misses                       0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate                     0.00 cache requests/second
        Miss Rate                    0.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

    This means no PHP files were being cached, even though I had browsed through over 10 PHP files with multiple includes, so there should have been some cached files. The user cache, however, is functioning fine:

        User Cache Information
        Cached Variables             0 (0.0 Bytes)
        Hits                         1000
        Misses                       1000
        Request Rate (hits, misses)  0.84 cache requests/second
        Hit Rate                     0.42 cache requests/second
        Miss Rate                    0.42 cache requests/second
        Insert Rate                  0.84 cache requests/second
        Cache full count             0

    (Those numbers come from an APC caching test script which tries to retrieve and store 1000 entries and gives me the times; a sort of simple benchmark.)

    Can anyone help me here? Even though apc.cache_by_default = 1, no PHP files are being cached. This is my APC config (runtime settings):

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           1M
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.serializer              default
        apc.shm_segments            1
        apc.shm_size                32M
        apc.slam_defense            1
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     0
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              1

    Also, most PHP files are under 20KB, so apc.max_file_size = 1M is not the cause. I have tried using apc_compile_file to force some files into the opcode cache, with no luck. I have re-installed APC with debugging enabled, but nothing shows in the error_log. I have also tried setting mmap_file_mask to /dev/zero and /tmp/apc.xxxxxx, and set /tmp permissions to 777, to no avail. Any clue, anyone?

    Update: I have tried the following, and none of it causes the APC file cache to populate:

        1. Set apc.enable_cli = 1 and run a script from the CLI
        2. Set apc.max_file_size = 5M (just in case)
        3. Switched the PHP handler from DSO to FastCGI in WHM (then switched back to DSO, as it did not solve the problem)
        4. Even tried restarting the container

    Read the article

  • Is there any open source DVD duplication software like Nero?

    - by johnny
    Nero Essentials isn't working right, and I wondered if there is anything open source that I could use. I need to duplicate a DVD that I have authored: not a data disc, but a "real" DVD (with VOB files, etc.). CDBurnerXP did not have this. Or, if I create an .iso, is that the same thing when I burn it back to my duplicate? Thanks.

    Read the article

  • Weird SSH "connection timed out" errors

    - by bran
    This problem started when I tried to log in to my brand spanking new VPS server. I remember that on my first SSH attempt to the server I actually got prompted for a password several times, which would mean there is no port blocking from my ISP. The password didn't work for me (for some reason) and I had a lot of authentication failures. After that, attempts to log in to the server just timed out. I did the same at Media Temple (where SFTP used to work before), put in a wrong password, and now trying to SSH (or even SFTP) gives me timeout errors.

    So some kind of security feature is preventing me from trying to log in too many times, either on my side or on the server side. Any idea what it could be? Traceroute and ping work on the IPs. I am using a Zyxel WiMAX modem (MAX-206M1R, if that's relevant).

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 109.169.7.136 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 109.169.7.131 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 87.117.249.227 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected] -vv
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 87.117.249.227 [87.117.249.227] port 22.
        debug1: connect to address 87.117.249.227 port 22: Connection timed out
        ssh: connect to host 87.117.249.227 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        Could not create directory '/home/pavs/.ssh'.
        The authenticity of host 's122797.gridserver.com (205.186.175.110)' can't be established.
        RSA key fingerprint is 33:24:1e:38:bc:fd:75:02:81:d8:39:42:16:f6:f6:ff.
        Are you sure you want to continue connecting (yes/no)? yes
        Failed to add the host to the list of known hosts (/home/pavs/.ssh/known_hosts).
        Password:
        Password:
        Password:
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Received disconnect from 205.186.175.110: 2: Too many authentication failures for pavs

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        ssh: connect to host s122797.gridserver.com port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        ssh: connect to host s122797.gridserver.com port 22: Connection timed out

    Read the article

  • .NET: will Random.Random operate differently inside a static method

    - by Craig Johnston
    I am having difficulty with the following code, which is inside a static method of a non-static class:

        int iRand;
        Random rand;
        rand = new Random((int)DateTime.Now.Ticks);
        iRand = rand.Next(50000);

    The iRand number, along with some other values, is being inserted into a new row of an Access MDB table via OLEDB. The iRand number goes into a field that is part of the primary key, and the insert attempt is throwing the following exception even though the iRand number is supposed to be random:

        System.Data.OleDb.OleDbException: The changes you requested to the table were not successful because they would create duplicate values in the index, primary key, or relationship. Change the data in the field or fields that contain duplicate data, remove the index, or redefine the index to permit duplicate entries and try again.

    Could the fact that the method is static be making the iRand number stay the same, for some reason?
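
    Being static isn't the issue by itself; the usual culprit with this pattern is constructing a new Random on every call, since two calls within the same timer tick get the same seed and therefore the same sequence. A sketch of the standard fix, with a hypothetical class and method name:

        using System;

        class KeyGenerator
        {
            // one shared instance, seeded once; repeated calls continue one
            // pseudo-random sequence instead of re-seeding with the same ticks
            private static readonly Random rand = new Random();

            public static int NextKey()
            {
                // still not guaranteed unique: with only 50000 possible values,
                // collisions become likely after a few hundred rows (birthday
                // bound), so an autonumber column is the safe route for a key
                return rand.Next(50000);
            }
        }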

    Read the article

  • Using jQuery copy plugin from CSS Tricks

    - by ftntravis
    I'm trying to use this plugin that CSS Tricks suggests: http://css-tricks.com/snippets/jquery/duplicate-plugin/ Shouldn't the following allow me to click a button and create a copy?

        $.fn.duplicate = function(count, cloneEvents) {
            var tmp = [];
            for (var i = 0; i < count; i++) {
                $.merge(tmp, this.clone(cloneEvents).get());
            }
            return this.pushStack(tmp);
        };

        $('.copy').click(function() {
            $('#form li').duplicate(5).appendTo('#form');
        };

    It's not working when I click it :(
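
    For what it's worth, the snippet as posted has a syntax error: the $('.copy').click( call is opened with a parenthesis but closed with just };, so the script never parses. The corrected handler:

        // note the closing `});` -- it ends both the function and the click() call
        $('.copy').click(function() {
            $('#form li').duplicate(5).appendTo('#form');
        });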

    Read the article

  • How to find multiples of the same integer in an arraylist?

    - by Dan
    Hi, my problem is as follows. I have an ArrayList of integers. The ArrayList contains 5 ints, e.g. [5,5,3,3,9] or perhaps [2,2,2,2,7]. Many of the ArrayLists have duplicate values and I'm unsure how to count how many of each value exist. The problem is how to find the duplicate values in the ArrayList and count how many of each particular duplicate there are. In the first example [5,5,3,3,9] there are two 5's and two 3's; the second example [2,2,2,2,7] has only four 2's. The information I wish to find is: if there are any duplicates, how many of them there are, and which specific integer has been duplicated. I'm not too sure how to do this in Java. Any help would be much appreciated. Thanks.
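
    A minimal sketch of the standard approach: tally occurrences into a map, then report the entries whose count exceeds one:

        import java.util.*;

        public class DuplicateCounter {
            public static void main(String[] args) {
                List<Integer> values = Arrays.asList(5, 5, 3, 3, 9);

                // tally how many times each value occurs
                Map<Integer, Integer> counts = new HashMap<Integer, Integer>();
                for (int v : values) {
                    Integer seen = counts.get(v);
                    counts.put(v, seen == null ? 1 : seen + 1);
                }

                // report only the values that occur more than once
                for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
                    if (e.getValue() > 1) {
                        System.out.println(e.getKey() + " occurs " + e.getValue() + " times");
                    }
                }
            }
        }

    For [5,5,3,3,9] this prints that 5 occurs 2 times and 3 occurs 2 times; values without duplicates are simply skipped.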

    Read the article

  • Grails: Duplicates & unique constraint validation

    - by rukoche
    OK, here is a stripped-down version of what I have in my app.

    Artist domain:

        class Artist {
            String name
            Date lastMined
            def artistService

            static transients = ['artistService']
            static hasMany = [events: Event]

            static constraints = {
                name(unique: true)
                lastMined(nullable: true)
            }

            def mine() {
                artistService.mine(this)
            }
        }

    Event domain:

        class Event {
            String name
            String details
            String country
            String town
            String place
            String url
            String date

            static belongsTo = [Artist]
            static hasMany = [artists: Artist]

            static constraints = {
                name(unique: true)
                url(unique: true)
            }
        }

    ArtistService:

        class ArtistService {
            def results = [
                [name: "name", details: "details", country: "country", town: "town",
                 place: "place", url: "url", date: "date"]
            ]

            def mine(Artist artist) {
                results << results[0] // now we have a duplicate
                results.each {
                    def event = new Event(it)
                    if (event.validate()) {
                        if (artist.events.find { it.name == event.name }) {
                            log.info "grrr! valid duplicate name: ${event.name}"
                        }
                        artist.addToEvents(event)
                    }
                }
                artist.lastMined = new Date()
                if (artist.events) {
                    artist.save(flush: true)
                }
            }
        }

    In theory event.validate() should return false and the event should not be added to the artist, but it doesn't, which results in a DB exception on artist.save(). Although I noticed that if the duplicate event is persisted first, everything works as intended. Is it a bug or a feature? :P
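
    A note on what may be happening, hedged since it isn't stated in the post: GORM validates unique: true with a database query, so an unsaved duplicate that exists only in memory (or in the not-yet-flushed artist.events collection) can still pass validate(). A sketch of guarding against in-flight duplicates by hand, as a replacement for the loop inside mine():

        def seenNames = [] as Set
        results.each {
            def event = new Event(it)
            // validate() catches duplicates already in the DB; the Set and the
            // collection check catch duplicates that are not yet persisted
            if (event.validate()
                    && seenNames.add(event.name)
                    && !artist.events?.find { e -> e.name == event.name }) {
                artist.addToEvents(event)
            }
        }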

    Read the article

  • Faster s3 bucket duplication

    - by Sean McCleary
    I have been trying to find a better command-line tool for duplicating buckets than s3cmd. s3cmd can duplicate buckets without having to download and upload each file. The command I normally run to duplicate buckets using s3cmd is:

        s3cmd cp -r --acl-public s3://bucket1 s3://bucket2

    This works, but it is very slow, as it copies each file via the API one at a time. If s3cmd could run in parallel mode, I'd be very happy. Are there other options available, command-line tools or code that people use to duplicate buckets, that are faster than s3cmd?
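
    One option, assuming the newer AWS CLI is available: its s3 sync command issues server-side copies and runs multiple transfers concurrently, so no bytes pass through the local machine:

        # server-side bucket-to-bucket copy with concurrent transfers
        aws s3 sync s3://bucket1 s3://bucket2 --acl public-read

        # a cruder alternative: shard the keys by prefix and run several s3cmd
        # copies in parallel (the a/ and b/ prefixes here are hypothetical)
        s3cmd cp -r --acl-public s3://bucket1/a s3://bucket2/a &
        s3cmd cp -r --acl-public s3://bucket1/b s3://bucket2/b &
        wait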

    Read the article

  • Hosting a Subversion working copy in a remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I encountered.

    First of all: what's this about? I am currently trying to set up a distributed working environment for developing a web page. My plan was to set up an SVN repository for version control, a live server where the actual live page is hosted, and a development server where I can work on the page. To ease things I intended not to keep a local copy of the project on my disk, but to work directly on the files that the development server hosts. For that I set up a WebDAV directory under devserver.com/workspace that mapped to the files served under devserver.com/. So I could connect to devserver.com/workspace, change something, and view the results live at devserver.com/. So far this worked perfectly.

    The next step was to create an SVN repository to take care of my version control. I intended to be able to check in to the repository (actually hosted at devserver.com/subversion/) from my development server and, at any time, deploy any revision from the SVN to the live server with a small shell script, by checking out a copy of that revision into the live server directories. The second part, checking out into the live server, also worked perfectly. The first part, though, is where problems arose.

    My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows' built-in WebDAV support, which worked quite well: I can create, move, delete and edit files on my WebDAV share from my Windows machine perfectly. The next step was to check out a working copy from the SVN into the WebDAV share. In the first try I used the Eclipse plugin Subversive. The actual checkout worked fine and I can update and commit things to the repository; however, I cannot add any files to the ignore list, it always gives me an error. So I tried the same thing with a completely fresh repository using TortoiseSVN, and again it failed with the same errors. Here is what it says when trying to add files to svn:ignore:

        Some of selected resources were not added to ignore.
        svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props'

    This is what apache2 tells me when I try to add a file to svn:ignore:

        [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated).
        [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0]

    Actually both messages are repeated several times: the first one occurs first and is repeated about 5 times, and the second comes thereafter and is repeated probably more than 20 times. If I create a regular file, or delete, rename or modify one, none of those messages appear in my error.log.

    While writing this question I was able to add files to svn:ignore using TortoiseSVN. However, after that, Eclipse would not let me commit anymore: the error that used to pop up when adding files to svn:ignore now also shows up while committing. While searching the web I found some people who had this same message because they had files differing only in upper/lower-case naming; I checked my repository and did not find such files. I also read about people having trouble with WebDAV and file locking, because WebDAV's file-locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed; this error does not appear anymore, since I set up a completely fresh repository and working copy.

    I would really appreciate any help anyone can provide in fixing this problem! If there are any more questions feel free to ask; I know this is a somewhat unusual setup. Best regards, Daniel

    Read the article
