Search Results

Search found 26555 results on 1063 pages for 'active directory explorer'.


  • Quartz compositions created in Snow Leopard (10.6) don't work in Leopard (10.5) despite passing the runtime test

    - by adib
    Hi, I have a reasonably advanced quartz composition (many patches and subpatches) that was created in Snow Leopard but doesn't run well in Leopard: many elements are not rendered. The composition tested OK via Quartz Composer's Test in Runtime option and works fine for both Leopard 32-bit and Leopard 64-bit (menu item "File | Test in Runtime | Leopard 32-bits"). On an actual Leopard (32-bit) system, however, a lot of elements are not rendered. Below is a log excerpt from running the composition in QuickTime Player under Leopard:

        QuickTime Player[134] *** <QCNodeManager | namespace = "com.apple.QuartzComposer" | 335 nodes>: Patch with name "/units to pixels" is missing
        QuickTime Player[134] *** Message from <QCPatch = 0x06D82880 "(null)">: Cannot create node of class "/units to pixels" and identifier "(null)"
        QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create node of class "/resize image to target" and identifier "(null)"
        QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create connection from ["outputValue" @ "Math_1"] to ["Target_Pixels" @ "Patch_2"]

    The patch "units to pixels" is a system patch in Snow Leopard, whereas "resize image to target" is a custom virtual patch located in my home directory. It seems we can rule out the composition referencing a missing virtual patch: I tested the composition under another user's account and it ran fine, which shows it already embeds the "resize image to target" virtual patch from my home directory. I'm really puzzled why the composition passes the Leopard runtime test yet fails to run on an actual Leopard OS. Is there a post-processing step I need to run on the composition file? Is there any way to make the composition more compatible with Leopard? Thanks in advance.


  • Dell PowerEdge 860 always PXE boots

    - by Berto
    Hi, we bought a Dell PowerEdge 860 and installed Windows Server 2008 Std. So far so good. Now every time the server reboots, it tries to PXE boot and hangs on "cant find pxe bootable, press f1 to continue or f2 to setup". I have tried all the settings in the BIOS (it's the latest, A05): I have the NIC active without PXE, and I also tried with the NIC off. In fact, I have tried everything off and on in the BIOS and it still goes to PXE booting. I have one SATA drive, which boots after pressing F1. I would like to keep the system updated, but I can't because of the reboot issue. Can someone point me in some direction? Can I give any more information about the server? If yes, what do you need? I have spent a week on Google trying to figure this one out. Thx


  • Bridging VirtualBox over OpenVPN TAP adapter on Windows

    - by Sean Edwards
    I'm trying to configure a virtual machine (a VirtualBox guest running Backtrack 4) with a bridged adapter over a VPN connection. The VPN is hosted by the cybersecurity club at my university, and connects to a sandboxed LAN designed for penetration testing against various servers that the club has built. My host (Windows 7 Ultimate) connects to the VPN fine and is assigned an IP through DHCP, but for some reason the VM can't do the same thing, and I'm not sure why. It's as if OpenVPN is filtering out packets from the MAC address it doesn't recognize.

    I want the virtual machine to bridge over the VPN connection because our IT office has very strict policies about what you can and can't do on the network. I want to be able to run active attacks (ARP spoofing, nmap, Nessus scans) in the sandbox environment without risking the traffic accidentally going over the university network and getting my internet access revoked. Bridging over the VPN connection and running all attacks from inside the VM would solve that problem. Any idea why the host can use this interface, but the VM can't?
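
    For reference, a minimal sketch of binding the guest's NIC to the TAP adapter from the command line; the VM name is hypothetical, and the adapter name must be whatever VBoxManage list bridgedifs reports on the host. Note one possible cause of the symptom above: if the VPN server uses a routed (tun) setup or tracks only one client MAC per connection, frames from the guest's MAC may be dropped regardless of the bridge.

        rem list the host interfaces VirtualBox can bridge to, then attach
        rem the guest's first NIC to the OpenVPN TAP adapter (names vary)
        VBoxManage list bridgedifs
        VBoxManage modifyvm "Backtrack4" --nic1 bridged --bridgeadapter1 "TAP-Win32 Adapter V9"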


  • Cannot run setups from a mapped network drive on Windows 7

    - by Dimitri C.
    I'm trying to run an application setup by double-clicking the setup.exe from within Windows Explorer. The file is located on a mapped network drive, and I'm using Windows 7. This results in the following error message:

        The specified path does not exist. Check the path, and then try again.

    The workaround I found is to copy the installer to the main hard drive (C:) and run it from there; however, this is rather inconvenient. The same action did work on Windows 2000, Windows XP and Windows Vista. I have the impression that the problem only occurs with installers, as everything seemed to work fine with regular exes. Is there anyone who can explain this odd behavior?
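
    One workaround sketch, with a hypothetical share path: launch the installer from its UNC path rather than the drive letter. In cmd, pushd maps a temporary drive letter for UNC paths automatically, which sometimes sidesteps mapped-drive security-zone restrictions:

        pushd \\fileserver\installers
        setup.exe
        popd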


  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook "Permission Denied"

    - by Andy R
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running over HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings:

    Users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
          DAV svn
          SVNPath /home/svn/repos
          AuthType Basic
          AuthName "Subversion Repository"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script as the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume this is an issue with the Apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report via stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository, and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
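
    A hedged checklist based on the error above: "Can't create null stdout" points at /dev/null itself rather than at the hook, so verifying its permissions (and re-applying ownership on the repository) is a reasonable first step:

        # /dev/null should be a character device, mode crw-rw-rw- (666);
        # it occasionally ends up as a regular root-owned file
        ls -l /dev/null
        sudo rm /dev/null && sudo mknod -m 666 /dev/null c 1 3   # only if it was broken

        # re-apply ownership and make sure both hooks are executable
        sudo chown -R www-data:www-data /home/svn/repos
        sudo chmod 755 /home/svn/repos/hooks/pre-commit /home/svn/repos/hooks/post-commit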


  • Web browser being selective about the sites that it will visit

    - by Andrew Doran
    I've been trying to help my father-in-law with this problem but haven't been able to get anywhere. Since the weekend, the web browsers on his computer (Chrome and Internet Explorer on Windows XP) will only let him get to certain sites; for example, he is able to conduct his online banking, but he cannot visit www.bbc.co.uk, www.amazon.co.uk or www.ancestry.com. There is another computer in the house that goes via the same router, and it can connect to all of these sites, which suggests the problem is his machine. I ran a tracert to www.bbc.co.uk and it got through, but the web browser hangs with a message that it is waiting for a response. I tried the WinSockFix tool in case it was anything to do with a recent registry change, but that didn't work either. He can't think of anything he recently did on his machine that would cause the problem. Can anyone help?
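
    One possible diagnostic, offered only as a guess: symptoms like this (tracert works, but large TCP responses hang) sometimes point at an MTU black hole. The Windows ping flags below test the largest payload that passes unfragmented; 1472 is a standard 1500-byte MTU minus 28 bytes of IP/ICMP headers:

        ping www.bbc.co.uk -f -l 1472
        rem if that fails, step -l down (1464, 1400, ...) until replies return;
        rem a working value well under 1472 suggests an MTU problem on the path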


  • Windows 7: Can't see ISO file in C:\

    - by cbp
    I used DVD Shrink to create an ISO file and saved it into C:\. The ISO file is visible to some programs but not others. The file is not hidden as far as I am aware, but it cannot be seen by Windows Explorer, DVD Decrypter or a bunch of other programs. If I search for the file using Windows 7's Start Menu search tool, I can see it, and I can right-click and select Properties. The Properties window appears OK, but if I try to change tabs on it, I receive an error message as though the file is not there. DVD Shrink can still open the file OK. I can also find the file using Agent Ransack (a file searching tool), but then I cannot open it. What gives?
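
    A couple of low-risk checks sketched from the command line (the filename below is a placeholder) to see what attributes and short name the file actually has; odd or invalid Unicode filenames tend to show up in the 8.3 listing:

        rem list every .iso in the root, including hidden/system files,
        rem together with the 8.3 short names
        dir C:\*.iso /a /x
        rem show the attribute flags on the file itself
        attrib C:\image.iso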


  • maven-jar-plugin includes vs excludes

    - by Chris Williams
    I've got an existing pom file that includes a maven-jar-plugin section. It runs for the test-jar goal and is currently excluding a few directories:

        <excludes>
          <exclude>...</exclude>
          <exclude>...</exclude>
          <exclude>somedir/**</exclude>
        </excludes>

    I need to include one file in the somedir directory but leave out the rest of the files in that directory. I've read that includes have precedence over excludes, so I added something like the following (there was no includes section before):

        <includes>
          <include>somedir/somefile.xml</include>
        </includes>

    This ends up creating a test jar with only a few files in it (just the stuff in META-INF), and the file I included is not in the jar either. What I'd expect is a jar identical to the one created before my includes change, plus the one additional file. What am I missing here?
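
    For context, and hedged as a reading of the plugin's file scanner rather than project-specific advice: once an <includes> list exists it acts as a whitelist (only matching files are considered), and <excludes> are still applied on top of it, so somedir/somefile.xml stays excluded as long as somedir/** is listed. One sketch of a way around that is to drop the includes section and narrow the excludes instead; the subdirectory names here are hypothetical:

        <excludes>
          <exclude>somedir/other-a/**</exclude>   <!-- hypothetical -->
          <exclude>somedir/other-b/**</exclude>   <!-- hypothetical -->
          <!-- keep the original non-somedir excludes as they were -->
        </excludes>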


  • SQL log shipping for reporting

    - by Patrick J Collins
    I would like to create a read-only copy of my SQL Server 2008 database on a secondary server for reporting and analysis. I've been testing log shipping, configured to run every 5 minutes or so. Alas, there appears to be a stumbling block: exclusive access is required on the target database during the restore, which in turn requires killing all active connections. This is far from ideal, especially if a user is in the middle of running a report. Any better suggestions?

    Edit: I'm doing this on the Express edition.
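
    For reference, a sketch of the restore pattern usually scripted by hand on Express (which lacks the log-shipping agent); the file paths are hypothetical. WITH STANDBY is what keeps the copy readable between restores, though each restore still needs active sessions cleared first, so the disconnect problem is inherent to log shipping; transactional replication avoids it at the cost of more setup.

        -- apply the next log backup while keeping the copy readable;
        -- active sessions still have to be cleared (e.g. KILL) beforehand
        RESTORE LOG ReportingDb
            FROM DISK = N'C:\logship\ReportingDb_log.trn'
            WITH STANDBY = N'C:\logship\ReportingDb_undo.dat';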


  • Easily switching to open folders on a Mac

    - by Charles
    How do I easily switch to an open folder on a Mac? In Windows, which I'm used to, I can see all my open folders in my vertical taskbar, and all I need to do to switch to another window is click on the folder in the taskbar. There's no taskbar on a Mac, so when I have a lot of folders open, i.e. lots of Finder windows, how can I switch between them? The way I'm doing it now is to put Exposé on an active corner and switch that way. However, that's still quite hard, because first I have to bring up Exposé and then find my window. The folders are placed in random positions among the open apps, the folders are not in a list, and on a big screen I have to scan the whole screen to find the one I want, etc. Is it really this hard just to switch to a different folder on a Mac? :(


  • Postfix performance

    - by Brian G
    Running Postfix on Ubuntu, sending a lot of mail (~1 million messages) per day. Load averages are extremely high, but not much of it is CPU or memory load. Is anyone in a similar situation who knows how to remove the bottleneck? All mail on this server is outbound, so I would have to assume the bottleneck is disk.

    Just an update, here is what iostat looks like:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.00    0.00    0.12   99.88    0.00    0.00

        Device: rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
        sda       0.00  12.38  0.00  2.48    0.00  118.81    48.00     0.00   0.00   0.00   0.00
        sdb       1.49  22.28 72.28 42.57  629.70 1041.58    14.55   135.56 834.31   8.71 100.00

    Are these numbers in line with the performance you would expect from a single disk? sdb is dedicated to Postfix. I think it is queue shuffling, from incoming to active to deferred.

    More details from questions: the server is a quad-core Xeon E5405 @ 2.00GHz with 4 GB RAM. Load average: 464.88, 489.11, 483.91 on 4 cores, but memory utilization and CPU are minimal. Postfix instances: between 16 and 32.
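
    Those sdb numbers (await over 800 ms, 100% utilization, a deep request queue) do look disk-bound. As one hedged mitigation sketch, cutting metadata writes on the queue filesystem is a common first step; the device and filesystem below are placeholders for whatever actually backs /var/spool/postfix:

        # /etc/fstab sketch: noatime avoids an inode write for every queue-file
        # read during incoming -> active -> deferred shuffling
        /dev/sdb1  /var/spool/postfix  ext3  defaults,noatime  0  2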


  • Problems locating Redmine database

    - by zordor
    I have an active Redmine, but I cannot find the database it is running on right now. It should be on PostgreSQL, but the database where it should be running is empty. Does anybody have any idea how to check the current database used by Redmine? Please let me know if you need any extra information. Thank you.

    EDIT: OK, I now know the database it is using. In database.yml I have project_redmine, but it is using the database project, and I don't know why. That database is used by the developers for the actual project, so this is of course causing me problems. I am unable to run it on the right DB (project_redmine); any ideas? :S
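
    A guess worth checking: Rails reads the database.yml block matching the environment it was started in, so if only one environment names project_redmine, an instance started under a different RAILS_ENV will happily use another database. A sketch of what the production block would look like; the credentials are hypothetical:

        production:
          adapter: postgresql
          database: project_redmine
          host: localhost
          username: redmine   # hypothetical
          password: secret    # hypothetical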


  • load class not in classpath dynamically in web application - without using custom classloader

    - by swdeveloper
    I am developing a web application that generates Java classes on the fly. For example, it generates the class com.people.Customer.

    1. In my code, I dynamically compile this to get com.people.Customer.class and store it in some directory, say repository/com/people/Customer.class, which is not on the classpath of my application server.
    2. My application server (WebSphere Application Server, Apache Tomcat, etc.) picks up classes from the WEB-INF/classes directory; the classloader uses this to load the classes.
    3. After compilation I need to load this class so that it becomes accessible to other classes using it after its creation.
    4. When I use Thread.currentThread().getContextClassLoader().loadClass("com.people.Customer"), the classloader is obviously not able to load the class, since it's not on the classpath (not in WEB-INF/classes). For similar reasons, getResource(..) or getResourceAsStream(..) also do not work.

    I need a way to read the class Customer.class, maybe as a stream (or any other way would do), and then load it. The constraints:

    - I cannot add the repository folder to the WEB-INF/classes folder.
    - I cannot create a new custom ClassLoader. If I create a new ClassLoader and it loads the class, the class will not be accessible to the parent ClassLoader.

    Is there any way of achieving this? If not, then in the worst case, is there a way of overriding the default classloader with a custom classloader for the web application? The same classloader should be used throughout the entire lifecycle of my web application. Appreciate any solution :)
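
    Not a definitive answer, but one widely-circulated hack worth sketching: instead of creating a child loader, inject the repository directory into the existing context classloader via the protected URLClassLoader.addURL. This only works where the container's webapp loader actually extends URLClassLoader (true of older Tomcat versions; WebSphere may differ), so treat that as an assumption to verify:

        import java.io.File;
        import java.lang.reflect.Method;
        import java.net.URL;
        import java.net.URLClassLoader;

        public class RepositoryLoader {
            /** Adds a directory of generated classes to the current webapp loader. */
            public static void addRepository(File repositoryDir) throws Exception {
                ClassLoader cl = Thread.currentThread().getContextClassLoader();
                if (!(cl instanceof URLClassLoader)) {
                    throw new IllegalStateException("context loader is not a URLClassLoader");
                }
                Method addUrl = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
                addUrl.setAccessible(true); // addURL is protected
                addUrl.invoke(cl, repositoryDir.toURI().toURL());
            }
        }

    After calling addRepository(new File("repository")), the original loadClass("com.people.Customer") call would resolve through the same classloader, keeping the class visible to the rest of the webapp.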


  • php.ini file creation on server

    - by tibin mathew
    Hi, I am developing a PHP website. I can't see a php.ini file on my server, and my host will not provide one, so I'm now going to create a copy of that php.ini file. I have tried system(); I searched Google and found this link: http://drupal.org/node/290592. There they use:

        system("cp /usr/local/php5/lib/php.ini /home/YOURUSERNAME/php.ini");

    On my server, when I looked at phpinfo, my php.ini location is /ms/svc/php5/conf/php.ini, and when I looked up the current directory of my server using:

        $dir_path = str_replace($_SERVER['DOCUMENT_ROOT'], "", dirname(realpath(__FILE__))) . DIRECTORY_SEPARATOR;

    I got my current directory as /netapp/whnas-swamp/s11/s11/01712/www.sample.com/webdocs/. So my system command now looks like this:

        system("/ms/svc/php5/conf/php.ini /netapp/whnas-swamp/s11/s11/01712/www.sample.com/webdocs/php.ini");

    I have named this page getthephpini.php, and when I browse it I get a blank page, but no new php.ini file is created on my server. Is there any mistake in that code? Can anyone show me the correct way? Please help me. Thanks
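
    The immediate bug: the final system() call dropped the "cp" command itself, so the shell tries to execute the php.ini file instead of copying it. A hedged alternative sketch using PHP's own copy(), which also reports an error instead of failing silently (and works even where system() is disabled):

        <?php
        // copy the server-wide php.ini next to this script
        $src = '/ms/svc/php5/conf/php.ini';
        $dst = dirname(__FILE__) . DIRECTORY_SEPARATOR . 'php.ini';
        if (copy($src, $dst)) {
            echo "copied php.ini to $dst";
        } else {
            echo 'copy failed - check open_basedir, safe_mode and directory permissions';
        }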


  • No Windows 7 users available for login after Dell DataSafe factory reset

    - by user897052
    I created install discs using the Dell DataSafe 2.0 backup utility in order to re-install Windows on a friend's laptop (a Dell Inspiron N5110), then ran the discs to do a factory reset. After the whole process, it booted, started loading Windows 7, displayed the messages "setup is preparing your computer for first use" and "setup is checking video performance," and showed the login screen. However, there don't seem to be any active users on the machine; I opened a command prompt window to check the users on the machine. Using the command prompt (again, from the login window), I activated/enabled the administrator account, and even created another admin account, but upon logging in I received several errors, couldn't load any MMCs, etc. Any help would be appreciated.
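
    For reference, the commands the poster describes running from the login-screen prompt would typically look like this (account name and password hypothetical):

        net user administrator /active:yes
        net user newadmin P@ssw0rd /add
        net localgroup administrators newadmin /add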


  • Adding entries to the context menu and organising them in Windows 7

    - by Ultra
    So I've got the hang of adding keys to HKEY_CLASSES_ROOT\Directory\Background\Shell, and I know I can add a string entitled 'Position' and change its value to position the entry I've made, but I can't figure out how to do three things (nor can I find anything guiding me in doing them):

    1) How to put a bar on either side of an entry to separate it from other entries
    2) How to position an entry in an exact place in the context menu (e.g. above or below a certain other entry)
    3) How to make an entry that brings up another list of entries (like the 'View' and 'Sort by' entries that are already there when you right-click in Windows Explorer)

    I wasn't sure whether this goes on Stack Overflow or Super User, but I thought maybe it goes here since I'm using Regedit rather than coding it (though I am aware you can write a .reg file and then execute it to install these sorts of things). Thanks!
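
    On Windows 7, cascading entries and separators are possible via the SubCommands/CommandStore mechanism. The following is a hedged sketch, with every key and command name made up for illustration; CommandFlags=0x20 draws a separator above the item it is set on. Exact positioning relative to arbitrary built-in entries (question 2) does not appear to be exposed beyond the Top/Bottom values of 'Position'.

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\Background\shell\MyTools]
        "MUIVerb"="My Tools"
        "SubCommands"="mytools.notepad;mytools.calc"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\mytools.notepad]
        @="Open Notepad"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\mytools.notepad\command]
        @="notepad.exe"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\mytools.calc]
        @="Open Calculator"
        "CommandFlags"=dword:00000020

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\mytools.calc\command]
        @="calc.exe"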


  • How to stop Mac OS X from using swap while there is still "Inactive" memory?

    - by Motin
    A common phenomenon in my day-to-day usage of OS X (and several others' usage, according to various posts throughout the internet) is that the system seems to become slow whenever there is no more "Free" memory available. Supposedly this is due to swapping, since heavy disk activity is apparent and vm_stat reports many pageouts (correct me if I'm wrong). However, the amount of "Inactive" RAM is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends.

    According to http://support.apple.com/kb/ht1342:

        Inactive memory: This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk.

    And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html:

        The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time.

    So, basically: when a program has quit, its memory becomes marked as Inactive and should be claimable at any time. Still, OS X prefers to start swapping memory out to the swap file instead of just claiming this memory whenever "Free" memory gets too low.

    Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touching the swap file? Some sources (^2.) indicate that OS X pages out the "Inactive" memory to swap before releasing it, but that doesn't make sense if the memory may be released at any time. Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't involve disabling swap/dynamic_pager altogether and restarting...)

    I do appreciate the purge command, as well as the concept of repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory rather than fixes for the swap/release decision logic. Btw, a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses it.

    ^1. UPDATE 17-mar-2012: Since I first posted this question, I have gone from 4 GB to 8 GB of installed RAM, and the problem remains. The amount of "Inactive" RAM was 0.5-1.0 GB before and is now typically around 1.0-2.0 GB when swapping starts/occurs/ends, i.e. it seems that around 12.5%-25% of the RAM is preserved as Inactive by OS X kernel logic.

    ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day: "Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory."

    UPDATE 17-mar-2012: Here is a round-up of the methods that have been suggested to help so far:

    - The purge command. "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc." This is useful to prevent OS X from swapping out the disk cache (which is ridiculous that OS X actually does in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out anyway, one simply ends up with a cold disk buffer cache, probably affecting performance negatively.

    - The FreeMemory app and/or repairing disk permissions to force some Free memory. This doesn't help release any memory; it only moves some gigabytes of memory contents from RAM to the HD. In the end, this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of their VM is now on swap.

    - Speeding up swap allocation using dynamicpagerwrapper. This seems a good thing to do in order to speed up swap usage, but it does not address the problem of OS X swapping in the first place while there is still inactive memory.

    - Disabling swap by disabling dynamicpager and restarting. This forces OS X not to use swap, at the price of the system hanging when all memory is used. Not a viable alternative...

    - Disabling swap using a hacked dynamicpager. Similar to disabling dynamicpager above; excerpts from the comments on the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual", "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang".

    To sum up, I am still unaware of a way to stop Mac OS X from using swap while there is still "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why OS X prefers to swap out memory that may be released from memory at any time?
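
    For anyone wanting to watch this behavior as it happens, the tools the question leans on can be driven from Terminal; a small sketch:

        vm_stat 5              # pageins/pageouts plus free/active/inactive counts, every 5 s
        sysctl vm.swapusage    # current swap file usage
        purge                  # drop the disk buffer cache (see the caveats above)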


  • vector quality of svg and pdf

    - by Kasper
    I'm converting PDF files to SVG, as it is easier to use SVG files on web pages. I first thought the quality of SVG must be similar to PDF, as they are both vector graphics. However, now that I look a little closer, it seems that PDF is a bit superior: https://dl.dropboxusercontent.com/u/58922976/Photos/1.png

    I wonder if I could change this in some way. Is this because PDF vectors are just better quality? Or is it because Chrome renders SVG at lower quality than Adobe Reader renders PDF? Is this a setting in the SVG file that I could change?

    Here is the PDF file: https://dl.dropboxusercontent.com/u/58922976/syllabusLinAlg2012.59.pdf and here is the SVG file: https://dl.dropboxusercontent.com/u/58922976/syllabusLinAlg2012.59.svg

    I made this SVG file in Illustrator, and only Chrome is able to use the embedded SVG fonts, so Firefox and Internet Explorer won't give the expected result.


  • UEC - Can the Cluster Controller and Storage Controller be separate systems?

    - by Jeremy Hajek
    My department is implementing an Ubuntu Enterprise Cloud. I have done the testing and am quite comfortable with the 4 pieces: CC/SC, CLC, WS, NC. Looking at the various documents below, it appears that the Storage Controller and Cluster Controller (eucalyptus-sc and eucalyptus-cc) are always installed on the same system. My question is this: can I install the Storage Controller and the Cluster Controller on separate systems?

    - http://open.eucalyptus.com/wiki/EucalyptusAdvanced_v2.0 - the picture indicates that CC and SC are two different machines
    - http://www.canonical.com/sites/default/files/active/Whitepaper-UbuntuEnterpriseCloudArchitecture-v1.pdf - p. 10, first paragraph, uses the word "machine(s)"
    - http://software.intel.com/file/31966 - p. 8 indicates the same separate architecture

    BUT https://help.ubuntu.com/community/UEC/PackageInstallSeparate indicates that the SC and CC are to be on the same system.


  • Not able to connect to a Mac client from a Windows machine

    - by Manish
    I have a Server.exe file which I use to connect to a Mac (I am fairly confident that Server.exe is not buggy). When I try to do this, I get the often-cited error "No connection could be made because the target machine actively refused it". I searched some existing questions about this on the forum, and it looks like this might be a firewall issue. FWIW, I don't have any firewall set on my Mac (the client), and on my server machine (Windows 7 64-bit), the firewall settings are:

    - Incoming connections: Block all connections to programs that are not on the list of allowed programs
    - Active domain networks: same domain as the one my client is on
    - Windows Firewall state: Off

    Do you think I need to change something here? Can someone help me with next steps?
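
    Two low-risk checks, sketched with standard Windows tools; "actively refused" usually means nothing is listening on the target port, or a rule is blocking the program. The .exe path below is a placeholder:

        rem on the listening end, check that the expected port is open
        netstat -an | findstr LISTENING
        rem (the Mac equivalent is: netstat -an | grep LISTEN)

        rem on Windows, explicitly allow the program through the firewall
        netsh advfirewall firewall add rule name="Server inbound" dir=in action=allow program="C:\path\to\Server.exe" enable=yes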


  • FTP transfer hangs for random files

    - by hoffmandirt
    I've been stuck on this FTP issue for a while now. I have IIS 7 set up with an IIS 6 FTP server running on a Windows Server 2008 box. The problem I am running into is that I can't download certain files from the FTP server, even though I uploaded those files to it; the connection times out after 120 seconds. I have used Wireshark and checked the log files, and the only message I see is the timeout message. The first thing that came to mind was permission issues; however, I have tried probably every combination of permissions I can think of, with the end goal of making the permissions the same for the files that work and the files that do not. With the list of files I have now, I can download the zip, war, and msi files, but not the txt or sql files. It almost seems like a binary thing, but I've changed the transfer mode on my FTP client and also toggled the Active/Passive options.


  • Merits and demerits of various Linux Fibre Channel multipath options

    - by wzzrd
    On our Linux servers, we currently use HP's qla2xxx drivers, because multipathing (active/passive) is built in. There are, however, various other options, like Red Hat's device-mapper-multipath with the stock qla2xxx drivers (multibus and failover) and things like SecurePath and PowerPath (both of which can do trunking, iirc). Can someone tell me what the merits and demerits of the various options are (if I can ask such a question), besides the obvious fact that the SecurePath/PowerPath options cost vast amounts of money? I'm mainly interested in the freely available options, like HP's qla2xxx vs. Red Hat's multipathd and possible other open-source solutions, but I would like to hear good reasons to go for the commercial solutions too.

    UPDATE: I'll be benchmarking the various options over the coming few days: the average of 10 runs of iozone for each option, the options being native qla2xxx failover, native qla2xxx multibus, and HP qla2xxx failover. I'll post a summary of results here for those interested.
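
    For the device-mapper-multipath route, the two policies mentioned map directly onto path_grouping_policy in /etc/multipath.conf. A minimal sketch, with values illustrative rather than tuned for any particular array:

        defaults {
            user_friendly_names yes
            path_grouping_policy failover   # active/passive; "multibus" spreads I/O over all paths
        }
        blacklist {
            devnode "^sda$"                 # hypothetical: keep the local system disk out of multipath
        }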


  • How to let m2eclipse use Nexus repositories instead of the Maven one

    - by lisak
    I have this situation: an artifact is in the local Maven repo that I don't want to use anymore. Instead, I want it to be downloaded by Maven from the proxied Nexus remote repository. It's a typical situation, because a lot of artifacts are called just name-SNAPSHOT: the artifact changes but the name stays the same. With Eclipse and m2eclipse running:

    1. I delete the entire directory of the artifact in the local Maven repo.
    2. m2eclipse "Reindex local maven repository" - which creates a new Nexus index for the local Maven repo, I guess.
    3. Project > Maven > Update Dependencies - now m2eclipse should run Maven, which doesn't see the artifact in the local Maven repo, so it uses the Nexus repositories to download it (expected behavior).

    Instead, the directory structure in the local Maven repo is recreated and there is this file, "m2e-lastUpdated.properties", with the following inside:

        local|http\://nexus\:8082/nexus-webapp-1.6.0/content/groups/public|javadoc=1274399332215
        local|http\://nexus\:8082/nexus-webapp-1.6.0/content/groups/public|sources=1274399332161

    and m2eclipse says:

        Missing artifact net.sourceforge.htmlunit:htmlunit:jar:2.8-SNAPSHOT:compile

    even though the artifact physically exists here:

        nexus:8082/nexus-webapp-1.6.0/content/repositories/htmlunit-snapshot/net/sourceforge/htmlunit/htmlunit/2.8-SNAPSHOT/htmlunit-2.8-SNAPSHOT.jar

    Maven just doesn't use this location at all. Trust me, I have tried everything; this m2eclipse behavior is terrible.
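
    Two things are usually involved here, offered as hedged suggestions rather than a confirmed fix. First, m2eclipse caches failed lookups in those *lastUpdated* marker files, so they have to be deleted along with the artifact directory before re-resolving. Second, routing every request through the Nexus group via a mirror in ~/.m2/settings.xml makes both command-line Maven and m2eclipse resolve against Nexus; the URL below is taken from the question:

        <settings>
          <mirrors>
            <mirror>
              <id>nexus</id>
              <mirrorOf>*</mirrorOf>
              <url>http://nexus:8082/nexus-webapp-1.6.0/content/groups/public</url>
            </mirror>
          </mirrors>
        </settings>

    One more assumption to verify: the snapshot lives in the htmlunit-snapshot repository, which must be a member of the public group in Nexus for the group URL to serve it.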


  • ASPX FormsAuthentication.RedirectFromLoginPage function is not working anymore

    - by Mike Webb
    Here is my issue. I have an ASPX web site, and I have code in there to redirect from the login page with the call to FormsAuthentication.RedirectFromLoginPage(username, false). This sends the user from the root website folder to 'website/Admin/'. I have a default.aspx page in 'website/Admin/', and the redirect works on a previous version of the website we have running currently, but the one I am updating on a separate test server is not working. It gives me the error:

        Directory Listing Denied. This Virtual Directory does not allow contents to be listed.

    I have this in the config file under the "authentication" option:

        <authorization>
          <allow users="*" />
        </authorization>

    and for the location of Admin:

        <location path="Admin">
          <system.web>
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </location>

    Also, there is no difference in the web.config, Login.aspx, or default.aspx files between the current server and the test server, so I am confused as to why the redirect will not work on both. It even works in the Visual Studio server environment, for which the code is also identical. Any suggestions and help are appreciated.
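
    One hedged lead: "Directory Listing Denied" comes from IIS rather than ASP.NET, and typically means the test server has no default document configured for that folder, so the redirect to 'website/Admin/' never reaches default.aspx. On IIS 6 this is set on the site's Documents tab; on IIS 7 a web.config sketch would be:

        <system.webServer>
          <defaultDocument enabled="true">
            <files>
              <add value="default.aspx" />
            </files>
          </defaultDocument>
        </system.webServer>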


  • How to create systemd.service in Fedora 16 (x86_64)?

    - by marverix
    I have a big problem with creating a service the new way - via systemctl (systemd.service) - in Fedora 16. I want to create a very simple service for the minidlna server. I have created a new file called minidlna.service in /lib/systemd/system/, and here is what it looks like:

        [Unit]
        Description=Mini DLNA

        [Service]
        Type=oneshot
        ExecStart=/usr/sbin/minidlna

        [Install]
        WantedBy=multi-user.target

    Unfortunately, systemctl status minidlna.service prints:

        Loaded: loaded (/lib/systemd/system/minidlna.service; enabled)
        Active: inactive (dead) since Sat, 03 Dec 2011 20:49:23 +0100; 9s ago
        Main PID: 1580 (code=exited, status=0/SUCCESS)
        CGroup: name=systemd:/system/minidlna.service

    Any ideas how to fix it? Cheers!
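
    A hedged reading of that status: minidlna forks into the background, and with Type=oneshot systemd considers the unit finished the moment the launcher exits, hence "inactive (dead)" with status=0/SUCCESS. One sketch of a unit matching that behavior, assuming this minidlna build supports -d to stay in the foreground (otherwise Type=forking with a PIDFile is the usual alternative):

        [Unit]
        Description=MiniDLNA media server
        After=network.target

        [Service]
        Type=simple
        ExecStart=/usr/sbin/minidlna -d

        [Install]
        WantedBy=multi-user.target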

