Search Results

Search found 9690 results on 388 pages for 'unc share'.

  • ssis package from SQL agent failed

    - by Pramodtech
    I have a simple package that reads data from a CSV file and loads it into a SQL table. The file is located on another server and is shared; I use a UNC path in the package. The package is scheduled via a SQL Agent job. The job worked fine for a week and then suddenly started giving the error "The file name "\\124.0.48.173\basel2\Commercial\Input\ACBS_GSU.csv" specified in the connection was not valid. End Error Error: 2010-04-20 16:15:07.19 Code: 0xC0202070 Source: ACBS_GSU Connection manager "CSV file conection" Description: Connection "CSV file conection" failed validation." Any help will be appreciated.
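
    A quick way to rule out a permissions or connectivity problem is to check the UNC path under the same account the SQL Agent job runs as. A minimal PowerShell sketch (the path is the one from the error above):

      # run this in a console or job step opened under the SQL Agent service/proxy account
      Test-Path '\\124.0.48.173\basel2\Commercial\Input\ACBS_GSU.csv'

    If the share is reachable but flaky at package start-up, setting DelayValidation to True on the connection manager (or the package) is a common workaround, since the error above is raised during validation rather than during the actual read.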

    Read the article

  • Console-App to get all open files for processes

    - by t.kehl
    Hi, I am searching for a console app (where I can pipe the output to a txt file) that gives me a list of all current processes and the files each process has open. The tool should also work when the user doesn't have administrative privileges, and it should also report file paths located on the network (UNC and absolute/mapped paths). Is there something like this that I can call from another tool to get the information? I am on a Windows system. I have an open file name and now need to get the full path of the file.

    Read the article

  • How do I save an altered image in matlab?

    - by ef-i-blinky
    So I am using the code located here: http://wwwx.cs.unc.edu/~sjguy/CompVis/Features/BlobDetect.m and I was wondering how to save the final blob-detected image. The image that I am doing the blob detection on gets shown, and then he manually draws the lines on the image here: Xbar = cx1+X.*cos(alpha)+Y.*sin(alpha); Ybar = cy1+Y.*cos(alpha)-X.*sin(alpha); line(Xbar', Ybar', 'Color', color, 'LineWidth', ln_wid); I then want to save this image using something like imwrite. I have been reading around and it seems that no one really has an answer to this problem. Thanks for any help you can give me, Josh
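
    Because the lines are drawn with line() on top of the displayed image, the result only exists in the figure, not in the image matrix, so it has to be captured from the figure before writing it out. A minimal MATLAB sketch (the output filename is just an example):

      frame = getframe(gca);              % grab the rendered axes, including the overlaid lines
      imwrite(frame.cdata, 'blobs.png');  % write the captured pixels to disk
      % or save the whole figure instead: saveas(gcf, 'blobs.png');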

    Read the article

  • strange SQL Server attach database error

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise with VSTS 2008, and I am developing a simple web application using ASP.Net and Forms Authentication. When I use the ASP.Net configuration tool from VSTS for my project (I want to use this tool to manually add some Forms Authentication users), I get the following error (SqlException): trying to attach file D:\Projects\MyTest\App_Data\aspnetdb.mdf to an automatically named database failed; it may be caused by a database with the same name already existing, by the specified file not being able to be opened, or by the specified file sitting on a UNC share. On my computer there is no aspnetdb.mdf under D:\Projects\MyTest\App_Data, and I used aspnet_regsql to generate the database successfully before running the configuration tool. Why is there such an error? How do I fix it? Thanks in advance, George
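
    Since the database was already created with aspnet_regsql, the usual fix is to point the built-in providers at that server database instead of letting them attach a local .mdf, by overriding the LocalSqlServer connection string in web.config. A hedged sketch (server and catalog names are assumptions; use whatever aspnet_regsql was run against):

      <connectionStrings>
        <remove name="LocalSqlServer" />
        <!-- the default membership/role/profile providers all reference LocalSqlServer -->
        <add name="LocalSqlServer"
             connectionString="Data Source=.;Initial Catalog=aspnetdb;Integrated Security=True"
             providerName="System.Data.SqlClient" />
      </connectionStrings>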

    Read the article

  • An attempt to attach an auto-named database for file failed in Vb.Net

    - by user2454135
    I am trying to connect to a database for the first time, and I am getting this error: An attempt to attach an auto-named database for file VBTestDB.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share. The error occurs on myconnect.Open(). Here's my code: Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Dim myconnect As New SqlClient.SqlConnection myconnect.ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=VBTestDB.mdf;Integrated Security=True;User Instance=True;" Dim mycommand As SqlClient.SqlCommand = New SqlClient.SqlCommand() mycommand.Connection = myconnect mycommand.CommandText = "INSERT INTO Card (CardNo,Name) VALUES (@cardno,@name)" myconnect.Open() Try mycommand.Parameters.Add("@cardno", SqlDbType.Int).Value = TextBox1.Text mycommand.Parameters.Add("@name", SqlDbType.NVarChar).Value = TextBox2.Text mycommand.ExecuteNonQuery() MsgBox("Success") Catch ex As System.Data.SqlClient.SqlException MsgBox(ex.Message) End Try myconnect.Close() End Sub
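
    AttachDbFilename=VBTestDB.mdf is a relative path, so SQL Server Express resolves it against its own working directory rather than the application's folder, which is a common cause of exactly this error. A hedged VB.NET sketch that anchors the path to the application directory (it assumes VBTestDB.mdf is copied next to the executable, e.g. into bin\Debug):

      ' build an absolute path to the .mdf instead of relying on the working directory
      Dim mdfPath As String = IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "VBTestDB.mdf")
      myconnect.ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=" & mdfPath & ";Integrated Security=True;User Instance=True;"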

    Read the article

  • Program Terminates On File Move

    - by Merus
    I have a .Net program that, as one of its functions, takes a file from a user-specified directory and puts it in another, special, directory, specified via UNC (which may or may not be local). I don't open any of these files in this part of the code. There's this bizarre bug I'm having where, on a Windows Server 2003 SP2 VM, this program randomly does a hard abort while doing the move to a local folder. It just terminates. No exception, no logging, and it doesn't appear to happen at any particular moment. I can't reproduce this problem on my development machine, and it only appears to happen during the copy of a particular kind of file that's about a megabyte or so. There are other formats copied to different directories using very similar code, all smaller, and they work fine. Why would a Windows .Net program do a hard abort like this? What can I do to fix it?
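
    When a .NET process disappears with no exception and no log entry, a reasonable first step is to hook the last-chance events so whatever kills it leaves a trace, and to check the Windows Application event log for .NET Runtime error entries. A hedged VB.NET sketch (the log path is an assumption; stack overflows and some native faults will still bypass it):

      ' in startup code (e.g. Sub Main or the main form's Load handler):
      AddHandler AppDomain.CurrentDomain.UnhandledException, AddressOf OnUnhandledException

      ' last-chance logger; hypothetical log path
      Private Sub OnUnhandledException(ByVal sender As Object, ByVal e As UnhandledExceptionEventArgs)
          IO.File.AppendAllText("C:\logs\mover-crash.log", _
              DateTime.Now.ToString() & " " & e.ExceptionObject.ToString() & Environment.NewLine)
      End Sub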

    Read the article

  • AWStats is processing log files but does not display them

    - by Wouter
    I've set up AWStats on my VPS to get some more insight into the traffic coming to my site. As instructed I ran a manual build/update which ran fine: sudo -u www-data ./awstats.pl -config=xxxx.com Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925) From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"... Phase 1 : First bypass old records, searching new record... Searching new records from beginning of log file... Phase 2 : Now process new records (Flush history on disk after 20000 hosts)... Warning: awstats has detected that some hosts names were already resolved in your logfile /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |. If DNS lookup was already made by the logger (web server), you should change your setup DNSLookup=1 into DNSLookup=0 to increase awstats speed. Jumped lines in file: 0 Parsed lines in file: 814 Found 0 dropped records, Found 0 corrupted records, Found 0 old records, Found 814 new qualified records. It also produced the file in the DatDir: /var/lib/awstats/awstats052010.xxxx.com.txt which contains what I would expect. BUT when I visit: xxxx.com/awstats/awstats.pl it tells me Last Update: Never updated (See 'Build/Update' on awstats_setup.html page) and the rest of the page is blank. I'm pretty sure I set it up correctly but now I cannot figure out why this is happening. Hopefully someone smarter than me can help me. Thank you in advance.
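
    When the data file exists but the CGI still reports "Never updated", the page is usually reading a different config (the ?config= parameter) or a different DirData than the update run used, or www-data cannot read the data directory. A hedged set of checks, following the Debian-style paths above:

      # the page should be called with the same config name that was updated:
      #   http://xxxx.com/awstats/awstats.pl?config=xxxx.com
      grep -n '^DirData' /etc/awstats/awstats.xxxx.com.conf                 # should point at /var/lib/awstats
      sudo -u www-data ls -l /var/lib/awstats/awstats052010.xxxx.com.txt    # readable by the web server?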

    Read the article

  • Windows telling me, the local security authority is internally inconsistent upon mounting a network drive

    - by acme
    For ages I've mounted a network share (via Samba to a Linux machine) in Windows 7 to access it via a drive letter. This worked flawlessly so far. Until now. Suddenly I couldn't access the drive anymore. Windows was telling me the network name (I don't remember the exact term) was already in use. So I disconnected and tried to connect again: net use Y: \\10.10.10.208\work After a long time I get a message saying "The Local Security Authority (LSA) database contains an internal inconsistency" A restart didn't help. The mapped share is accessible (it works on other machines in the same network), so obviously something strange is going on on my machine. Can anyone tell me how I can fix this inconsistency? Update: All machines that have saved the login information refuse with this error. So it must be something with the authorization. When I use net use Y: \\10.10.10.208\work /user:raphael it prompts me for the password and then returns that error message.
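
    Since every machine with saved credentials fails, the stored credential itself is a reasonable suspect. A hedged sequence to clear it and re-map (the exact target name to delete is whatever cmdkey /list shows for that server):

      net use Y: /delete
      REM list stored credentials and remove the stale entry for the server
      cmdkey /list
      cmdkey /delete:10.10.10.208
      net use Y: \\10.10.10.208\work /user:raphael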

    Read the article

  • QNTC and Windows Server 2008 R2

    - by Ben
    I am having a really hard time getting an iSeries (AS/400) machine talking to my new Windows Server 2008 R2 box using the QNTC file system on the iSeries. I had similar problems getting it to initially talk to a Windows Server 2003 machine, but enabling the local Guest account on the 2003 box solved that one. No such luck with the new 2008 box. When I do a WRKLNK /QNTC/SVR01 on the iSeries (which should show share listings, and does on any 2003 boxes) all I get is (Cannot find object to match specified name.). I know the iSeries likes the same username and password on the remote server, but unfortunately for us this is not the case. Anyhow, it does currently work with different username/password combinations on a 2003 box. To try and get the wretched things talking, I have made the 2008 server pretty open but the iSeries will not see shares on it. I have enabled the local Guest account, turned Windows firewall off, set the share permissions so Everyone has full control but to no avail. I read something on the internet about the iSeries only being able to handle NTLM authentication (and I understand by default that Server 2008 R2 only uses NTLMv2 and has NTLM disabled), so I made a special group policy for the server and tweaked all Group Policy settings under Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options but the iSeries STILL won't see it. We have a team of programmers who do all the system administration of the iSeries, but they are stumped for ideas on their side, and I'm stumped for ideas on my side. This is driving me crazy now, and if anybody has managed to get an iSeries to talk to Windows Server 2008 R2 using QNTC I would be very appreciative of any suggestions, be it on the Windows side, iSeries settings or even IBM PTF's that might patch anything. The iSeries is running V5R4 and I have *SECOFR privileges on it, if it helps. One final (most important!) note - The programmers think it's my system being tricky, and I think it's theirs - please prove me right :)
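
    If the NTLM theory is right, the setting that matters on the 2008 R2 box is "Network security: LAN Manager authentication level", which is backed by the LmCompatibilityLevel registry value. A hedged .reg sketch that relaxes it so plain NTLM responses are accepted (value 1 is an assumption; in a domain, set the equivalent GPO instead so it doesn't get re-applied over the top):

      Windows Registry Editor Version 5.00

      ; accept LM/NTLM, using NTLMv2 session security if negotiated
      [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
      "LmCompatibilityLevel"=dword:00000001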

    Read the article

  • WDS's MDT DeploymentShare and REMINST replicated with DFS-R does not read WIM from local WDS

    - by mbrownnyc
    I've read several guides on using DFS-R with WDS and MDT to replicate REMINST and DeploymentShare, and I have a particularly strange problem. On the receiving server, after configuring WDS and mounting the DeploymentShare into MDT's DeploymentWorkbench, I also performed the following: 1) in .\Control\Bootstrap.ini, changed DeployRoot to \%wdsserver%\DeploymentShare$ 2) Changed the UNC path at the root of the MDT Deployment Share in the DeploymentWorkbench to match that of the current server. 3) In Unattend.xml files located: .\Control**, modified the following value to match the current server: <cpi:offlineImage catelog://HOST/ I am able to boot and grab the LiteTouch PE image off the local WDS TFTP server, but the WIM files, the scripts, everything else is being pulled off the WDS server at the remote site (the original WDS server that was the source of the files within the DFS-R replicated folder). What do I do in order to solve this problem? I've grepped all the files below the DeploymentShare to look for instances of the hostname of the WDS server at the remote site (the source of the files), but I found none. Here are the guides I referred to: http://technet.microsoft.com/en-us/library/cc771324%28WS.10%29.aspx http://blogs.technet.com/b/askds/archive/2009/12/16/wds-and-dfsr-love-at-first-sync.aspx http://oasysadmin.com/2011/11/03/copying-moving-and-replicating-the-mdt-2010-deployment-share/

    Read the article

  • Apache2 configuration error: "<VirtualHost> was not closed" error.

    - by Chris
    So I've already checked through my config file and I really can't see an instance where any tag hasn't been properly closed...but I keep getting this configuration error...Would you mind taking a look through the error and the config file below? Any assistance would be greatly appreciated. FYI, I've already googled the life out of the error and looked through the log extensively, I really can't find anything. Error: apache2: Syntax error on line 236 of /etc/apache2/apache2.conf: syntax error on line 1 of /etc/apache2/sites-enabled/000-default: /etc/apache2/sites-enabled/000-default:1: <VirtualHost> was not closed. Line 236 of apache2.conf: # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ Contents of 000-default: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <VirtualHost *:443> SetEnvIf Request_URI "^/u" dontlog ErrorLog /var/log/apache2/error.log Loglevel warn SSLEngine On SSLCertificateFile /etc/apache2/ssl/apache.pem ProxyRequests Off <Proxy *> AuthUserFile /srv/ajaxterm/.htpasswd AuthName EnterPassword AuthType Basic require valid-user Order Deny,allow Allow from all </Proxy> ProxyPass / http://localhost:8022/ ProxyPassReverse / http://localhost:8022/ </VirtualHost>
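
    Since the directives themselves look balanced, the usual culprits for this error are a stray or non-printing character at the start of the file, or an extra file in sites-enabled being pulled in by the trailing-slash Include. A hedged set of checks:

      apache2ctl -t                                  # re-run the syntax check from the shell
      ls -la /etc/apache2/sites-enabled/             # look for editor backups or dangling symlinks that also get included
      cat -A /etc/apache2/sites-enabled/000-default | head -n 3   # expose a BOM or control characters on line 1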

    Read the article

  • How to get robocopy running in powershell?

    - by Mark Allison
    Hi, I'm trying to use robocopy inside powershell to mirror some directories on my home machines. Here's my script: param ($configFile) $config = Import-Csv $configFile $what = "/COPYALL /B /SEC/ /MIR" $options = "/R:0 /W:0 /NFL /NDL" $logDir = "C:\Backup\" foreach ($line in $config) { $source = $($line.SourceFolder) $dest = $($line.DestFolder) $logfile = $logDIr $logfile += Split-Path $dest -Leaf $logfile += ".log" robocopy "$source $dest $what $options /LOG:MyLogfile.txt" } The script takes in a csv file with a list of source and destination directories. When I run the script I get these errors: ------------------------------------------------------------------------------- ROBOCOPY :: Robust File Copy for Windows ------------------------------------------------------------------------------- Started : Sat Apr 03 21:26:57 2010 Source : P:\ C:\Backup\Photos \COPYALL \B \SEC\ \MIR \R:0 \W:0 \NFL \NDL \LOG:MyLogfile.txt\ Dest - Files : *.* Options : *.* /COPY:DAT /R:1000000 /W:30 ------------------------------------------------------------------------------ ERROR : No Destination Directory Specified. Simple Usage :: ROBOCOPY source destination /MIR source :: Source Directory (drive:\path or \\server\share\path). destination :: Destination Dir (drive:\path or \\server\share\path). /MIR :: Mirror a complete directory tree. For more usage information run ROBOCOPY /? **** /MIR can DELETE files as well as copy them ! Any idea what I need to do to fix? Thanks, Mark.
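
    The quotes around the whole argument list make PowerShell hand robocopy a single string, so everything after the source is treated as one parameter (note also the stray trailing slash in "/SEC/", and that the per-destination $logfile is built but never used). A hedged rework of the call, passing each switch as its own argument:

      # define the switches as arrays so each one reaches robocopy as a separate argument
      $what    = '/COPYALL', '/B', '/SEC', '/MIR'
      $options = '/R:0', '/W:0', '/NFL', '/NDL'
      robocopy $source $dest $what $options "/LOG:$logfile"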

    Read the article

  • PowerShell: New-PSDrive error handling

    - by mazebuhu
    Hello, I have a script where I mount a network drive with the command "New-PSDrive". Now, since the script runs as a "cronjob" on a server, I want to have some error detection: if for any reason the New-PSDrive command fails, the script should stop executing and notify that something went wrong. I have the following code: Try { New-PSDrive -Name A -PSProvider FileSystem -Root \\server\share } Catch { ... handle error case ... } ... other code ... For testing reasons I specified a wrong server name and I get the following error "New-PSDrive : Drive root "\\wrongserver\share" does not exist or it's not a folder". Which is OK since the server does not exist. But the script does not go into the Catch clause and stop. It happily continues to run and ends up in a mess since no drive is mounted :-) So my question: why? Is there any difference in exception handling in PowerShell? I should also note that I'm a noob in PowerShell scripting. Bye, Martin
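
    New-PSDrive reports this failure as a non-terminating error, and try/catch only reacts to terminating errors, so the Catch block is never reached. A hedged fix is to promote the error with -ErrorAction Stop (or set $ErrorActionPreference = 'Stop' for the whole script):

      Try {
          # -ErrorAction Stop turns the non-terminating error into one the Catch block can see
          New-PSDrive -Name A -PSProvider FileSystem -Root \\server\share -ErrorAction Stop
      } Catch {
          Write-Error "Could not map drive: $_"
          exit 1
      }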

    Read the article

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to only create one cache per account / site. This can be done with Fastcgi (last update 2006…) but with Fastcgid APC will have to create multiple caches for multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per process, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure ! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if this is the case then shouldn't APC have to use this user and not have access to other pools' caches ? An article here (in 2011) suggests that you would need to run one process per pool, creating multiple launchers on different ports and different config files with one pool per config file : http://groups.drupal.org/node/198168 Is this still necessary ? If so what would be the impact of running say 800 processes of php-fpm ? Would it be mainly memory ? If so how can I work out what the memory impact would be ? I guess that it would be better to run 800 instances of php-fpm than to have accounts creating multiple APC caches for a single site. If on average an account creates a 50MB cache and creates 3 caches per account, that makes 150MB per account, which makes 120GB… However if each account uses on average only 50MB that would make 40GB. We will have at least 128GB of ram on our next server so 40GB is acceptable if running 800 x PHP-FPM does not create an overhead of more than 20GB ! What do you think: is PHP-FPM the best way to go to provide a secure APC cache on shared hosting with a server that has a decent amount of memory ? Or should I be looking at another system ? Thanks !
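
    For reference, this is roughly what an isolated per-account pool looks like (names, paths and socket are assumptions; whether APC memory is actually private per pool still depends on how APC attaches its shared-memory segment under your build):

      ; /etc/php-fpm.d/account1.conf -- minimal per-account pool sketch
      [account1]
      user = account1
      group = account1
      listen = /var/run/php-fpm-account1.sock
      pm = ondemand
      pm.max_children = 5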

    Read the article

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan of $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something) since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements: Transport security via SSL to the bucket Encryption of bucket contents Bi-directional syncing Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old HTTP http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ) Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me to manage). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one. Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!

    Read the article

  • Centos CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution:https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0 I've added and upgraded gcc to centos cd /etc/yum.repos.d wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++ scl enable devtoolset-1.1 bash The result is this for my gcc [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC) And I tried to then install cmake through http://www.cmake.org/cmake/resources/software.html#latest But I keep running into this error: Linking CXX executable ../bin/ccmake /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad' /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line /lib64/libtinfo.so.5: could not read symbols: Invalid operation collect2: error: ld returned 1 exit status gmake[2]: *** [bin/ccmake] Error 1 gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2 gmake: *** [all] Error 2 The problem seems to come from the new gcc installed because it works with the default install. Is there a solution to this problem?
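
    The linker message itself points at the likely fix: ccmake's curses UI needs the keypad symbol from libtinfo, and the devtoolset linker is not pulling that library in implicitly. A hedged workaround (package name and flag are assumptions based on the error text; CMake's bootstrap normally honours LDFLAGS from the environment):

      sudo yum install ncurses-devel     # make sure the curses headers/libs are present
      export LDFLAGS="-ltinfo"           # ask the build to link libtinfo explicitly
      ./bootstrap && gmake && sudo gmake install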

    Read the article

  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel

    Read the article

  • Set JENKINS_HOME in Tomcat7?

    - by C. Ross
    I'm trying to set up Jenkins in Tomcat7 on Ubuntu. I installed Tomcat7 and deployed jenkins.war, and I now see the Jenkins home page at http://myhost:8080/jenkins, but it's attempting to create the Jenkins directory at /usr/share/tomcat7/.jenkins, which it can't for security reasons. I've already created /srv/jenkins and given the tomcat7 group permissions, and want to set JENKINS_HOME to that path. I've tried adding it to the tomcat configuration in /etc/tomcat7/server.xml: <GlobalNamingResources> <Environment name="JENKINS_HOME" value="/srv/jenkins" type="java.lang.String" override="false"/> <!-- Default settings --> And I've also tried adding it to the automatically created context file in ROOT/META-INF/context.xml (there is no $CATALINA_HOME/conf as far as I can tell). <Context path="/" antiResourceLocking="false" > <Environment name="JENKINS_HOME" value="/srv/jenkins/" type="java.lang.String"/> </Context> But even after restarting tomcat7 I still get the same result (trying to use /usr/share/tomcat7/.jenkins). Where do I need to set the environment variable for JENKINS_HOME in Tomcat7?
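
    Jenkins resolves JENKINS_HOME from a JNDI entry in its own web application's context, then from a Java system property, then from an OS environment variable, so entries in server.xml's GlobalNamingResources or in the ROOT context are never consulted. Two hedged options for Ubuntu's Tomcat7 packaging (the file paths are the usual defaults; adjust if your layout differs):

      # option 1: system property, appended to /etc/default/tomcat7
      JAVA_OPTS="$JAVA_OPTS -DJENKINS_HOME=/srv/jenkins"

      <!-- option 2: per-app context file at /etc/tomcat7/Catalina/localhost/jenkins.xml -->
      <Context>
        <Environment name="JENKINS_HOME" value="/srv/jenkins" type="java.lang.String" override="false"/>
      </Context>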

    Read the article

  • Which httpd.conf and php.ini files does plesk use

    - by Saif Bechan
    If I disable the include_path in /etc/php.ini, I can still see that there is an include path loaded from somewhere, and I want to know where this file is. If I use phpinfo() on my domain, it shows an include path of /usr/share/pear:/usr/share/php, even though in php.ini I used: ;inlucde_path=".:" In the section for additional php.ini files this is the list: /etc/php.d/curl.ini, /etc/php.d/dbase.ini, /etc/php.d/dom.ini, /etc/php.d/gd.ini, /etc/php.d/imap.ini, /etc/php.d/json.ini, /etc/php.d/mbstring.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/wddx.ini, /etc/php.d/xmlreader.ini, /etc/php.d/xmlwriter.ini, /etc/php.d/xsl.ini, /etc/php.d/zip.ini I have checked all these files but I can't find a reference to pear anywhere. That said, the /etc/httpd/conf/httpd.conf file puzzles me too. If I check this file, I can see the following values defined: DocumentRoot "/var/www/html" <Directory /> Order Deny,Allow Deny from all Options None AllowOverride None </Directory> <Directory "/var/www/html"> Options None AllowOverride None Order allow,deny Allow from all </Directory> The strange thing is that I don't even use this directory, so where does Plesk get its httpd.conf file from? I want to check that everything is OK, because my vhost.conf is not working. I don't know whether I should change these values or not, or where to find the values Plesk actually uses. I hope someone can help me with this.
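
    A quick way to see exactly which ini files PHP reads, and which one still sets the pear path, is to ask PHP itself and then grep the candidates (paths follow the list above):

      php -i | grep -E 'Loaded Configuration File|Scan this dir|include_path'
      grep -Rn 'include_path' /etc/php.ini /etc/php.d/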

    Read the article

  • Occasional "Could not load file or assembly" in asmx WebService on IIS and DFS

    - by user8804
    We have a handful of ASMX web services hosted on two identical Windows Server 2003 boxes. The virtual directory for the web services lives on a DFS share; both servers point to the same share. We have a load balancer between the internet and the two web servers. At a seemingly random interval (right now about twice per week), when a user tries to access a method on the web service, IIS returns the error "Could not load file or assembly" for one of the assemblies used in the method call, and it will continue reporting it each time the method is called until the app pool is recycled. We haven't found any distinguishable pattern to the problem. This is what I know: the missing assembly varies (but it's always a home-brew assembly); the web service method that fails varies; there is no noticeable pattern to the times or intervals where the problem appears; there are no admin users accessing the servers when the problem appears; the failing method will work correctly on one server and fail on the other, even though both point to the same bin folder; and the problem can always be corrected by recycling the app pool and making no other changes. I have enabled the Assembly Binder Log, and I know that the binder is looking in the correct location for the file. Our assemblies are compiled for .Net 3.5.

    Read the article

  • How to get html/css/jpg pages served by both apache & tomcat with mod_jk

    - by user53864
    I've got apache2 and tomcat6 both running on port 80 with mod_jk set up on Ubuntu servers. I had to set up a 503 error document (ErrorDocument 503 /maintenance.html) in the Apache configuration, and I managed to get it to work: the error page is served by Apache when Tomcat is stopped. The developers created a good-looking error page (an HTML page that pulls in CSS and JPG files), and I'm asked to get this page served by Apache when Tomcat is down. When I tried JkUnMount /*.css in the virtual host, the actual Tomcat JSP pages didn't work properly (they lost their formatting), as the Tomcat applications use JSP, CSS, JS, JPG and so on. I'm trying to find out whether it is possible to get .css and .jpg served by both Apache and Tomcat, so that when Tomcat is down the CSS and JPG files are still served by Apache and the proper error document appears. Does anyone have a technique? Here is my apache2 configuration: vim /etc/apache2/apache2.conf Alias / /var/www/ ErrorDocument 503 /maintenance.html ErrorDocument 404 /maintenance.html JkMount / myworker JkMount /* myworker JkMount /*.jsp myworker JkUnMount /*.html myworker <VirtualHost *:80> ServerName station1.mydomain.com DocumentRoot /usr/share/tomcat/webapps/myapps1 JkMount /* myworker JkUnMount /*.html myworker </VirtualHost> <VirtualHost *:80> ServerName station2.mydomain.com DocumentRoot /usr/share/tomcat/webapps/myapps2 JkMount /* myworker JkMount /*.html myworker </VirtualHost>
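
    One hedged approach is to keep the maintenance page and its assets under a dedicated path that is never mounted to the Tomcat worker, so Apache can always serve them without unmounting the extensions the applications rely on (paths and the worker name follow the configuration above; the asset files would be copied into /var/www/maintenance/):

      # served by Apache only, never forwarded to Tomcat
      Alias /maintenance /var/www/maintenance
      JkUnMount /maintenance/* myworker
      ErrorDocument 503 /maintenance/index.html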

    Read the article

  • [Zend Framework - Ubuntu10.04- Lamp- First Project] i get 500 error on http://localhost/zftutorial/p

    - by meyosef
    Hi, I'm new to Zend and LAMP; my software: Zend Framework, Ubuntu 10.04, LAMP. I made my first Zend project with Zend tool (following this tutorial: http://akrabat.com/wp-content/uploads/Getting-Started-with-Zend-Framework.pdf), but when I go to http://localhost/zftutorial/public I get a 500 error. My $ dir -l of zftutorial: drwxr-xr-x 6 root root 4096 2010-06-01 23:54 application drwxr-xr-x 2 root root 4096 2010-06-01 23:54 docs drwxr-xr-x 3 root root 4096 2010-06-02 00:23 library drwxr-xr-x 3 root root 4096 2010-06-02 00:00 nbproject drwxr-xr-x 2 root root 4096 2010-06-01 23:54 public drwxr-xr-x 4 root root 4096 2010-06-01 23:54 tests My /etc/apache2/sites-available/default: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Thanks,
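
    With a project generated by Zend tool, public/.htaccess relies on mod_rewrite, and a 500 error with AllowOverride All very often just means the module isn't enabled; Apache's error log states the real cause either way. A hedged first check on Ubuntu 10.04:

      sudo a2enmod rewrite                     # enable mod_rewrite if it isn't already
      sudo /etc/init.d/apache2 restart
      tail -n 20 /var/log/apache2/error.log    # the actual reason for the 500 is logged here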

    Read the article

  • Installing rpm module of Python for yum

    - by vito
    I've installed Python and yum from source (configure, make, make install), not using RPMs because that was leading to several other issues. So when I executed: # yum update ... I get the following error: Traceback (most recent call last): File "/usr/bin/yum", line 22, in <module> import yummain File "/usr/share/yum/yummain.py", line 22, in <module> import clientStuff File "/usr/share/yum/clientStuff.py", line 18, in <module> import rpm ImportError: No module named rpm Now because I've installed yum and python from source, do I need to install Python's rpm module from source, too? Because installing the rpm for this module led to the following error: # rpm -vih rpm-python-3.0.4-6x.i386.rpm warning: rpm-python-3.0.4-6x.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e error: Failed dependencies: python >= 1.5.2 is needed by rpm-python-3.0.4-6x.i386 libbz2.so.0 is needed by rpm-python-3.0.4-6x.i386 librpm.so.0 is needed by rpm-python-3.0.4-6x.i386 Suggested resolutions: /var/spool/up2datepython-2.3.4-14.7.el4.x86_64.rpm I tried searching for the source of this module, but I couldn't find it. Any help in installing this module is appreciated. Thanks for your time. Other info: # python -V Python 2.6.5

    Read the article

  • SpamAssassin 2010 Bug still active on my mailserver despite the offending rule being fixed - where t

    - by Ibrahim
    The SpamAssassin 2010 bug was supposed to be fixed not long after the bug became widely known, and indeed the offending rule in my /usr/share/spamassassin/72_active.cf has been updated. However, incoming messages are still being tagged by this rule, e.g.: X-Spam-Status: No, score=3.188 tagged_above=-999 required=6.31 tests=[BAYES_50=0.001, FH_DATE_PAST_20XX=3.188, SPF_PASS=-0.001] Here is the relevant rule: ##{ FH_DATE_PAST_20XX header FH_DATE_PAST_20XX Date =~ /20[2-9][0-9]/ [if-unset: 2006] describe FH_DATE_PAST_20XX The date is grossly in the future. ##} FH_DATE_PAST_20XX I'm on spamassassin/3.2.5-2+lenny1.1~volatile1 on Debian Lenny, completely up to date. Any pointers on where to look to figure out what's going on? I don't know anything about SpamAssassin; someone else usually manages this, but I'm free right now and am trying to figure out what the problem is, because it's been annoying us for a while and we only just realized this bug was still affecting us. Update: I've lowered the score for the FH_DATE_PAST_20XX rule to 0.1, both in /etc/spamassassin/local.cf and /usr/share/spamassassin/50_scores.cf, and it's still giving 3.188 points for this rule. Any idea what's going on? This really has me stumped. Update 2: It seems that after restarting amavisd, it's been fixed. What's the difference between amavisd and spamd? It seems like both should not be running, or something.
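
    For anyone hitting the same thing: amavisd embeds SpamAssassin through its Perl API and loads the rules and scores once at startup, so changes on disk are not picked up until amavisd is restarted (a standalone spamd would need the same). A hedged sequence on Debian Lenny (the init script name is an assumption):

      sa-update                      # pull updated rules, if update channels are configured
      spamassassin --lint            # sanity-check local rules and score overrides
      /etc/init.d/amavis restart     # have amavisd re-load SpamAssassin with the new rules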

    Read the article
