Search Results


  • Setting up DNS using VirtualMin/WebMin

    - by Nyxynyx
    I am moving from a cPanel server to one where I've installed Virtualmin. The LAMP stack and the website files have been set up properly and I can access the website by its IP address.

    Problem: now it's time to point my domain mydomain.com to my new server. After reading many guides describing how to set up BIND and master zones, I am pretty confused as to what to do, especially coming from a cPanel server where this is really simple to set up.

    What I've attempted:
    - Tried to register my nameservers ns1.mydomain.com and ns2.mydomain.com at my domain registrar, but I am missing the IPs I need to point these nameservers to. Should I set ns1.mydomain.com to the IP address of my web server, and not register ns2.mydomain.com at all?
    - When specifying the DNS servers for mydomain.com, I set the first one to ns1.apadment.com.
    - On the manager/admin page of my hosting provider, I am given the option to create a secondary (slave) DNS, which I assigned to the IP address of my server; though I am not sure how the slave DNS will copy the zone data from my web server. I have assigned this secondary DNS, ns.hostprovider.com, as the second DNS server for mydomain.com.
    - I tried creating a Virtual Server under Virtualmin, but it seems to mess up Apache's DocumentRoot for the site by creating and enabling a new vhost file that ends with .conf. I edited the .conf file to point DocumentRoot back to where it's supposed to be, /var/www/mydomain, instead of /user/mydomain.com.

    I believe the next step is to set up the zone. Virtualmin has already created a master zone with 8 different records (www.mydomain.com, ftp.mydomain.com, ...). Under Nameservers there are already 2 records: one is the hostname (a random name given by the host provider, ns12345.ip123-123.net), the other is the secondary slave DNS provided by the host provider. Does having BIND running on my web server make the server the master DNS? Thank you!
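    A minimal sketch of how the registrar and zone sides usually fit together (IPs and the serial are placeholders, not values from the question): register ns1.mydomain.com at the registrar with the web server's public IP as a glue record, then make sure the Virtualmin master zone publishes matching NS and A records:

        ; zone file for mydomain.com on the Virtualmin (BIND) master -- illustrative only
        $TTL 3600
        @       IN  SOA ns1.mydomain.com. admin.mydomain.com. (
                        2012010101  ; serial
                        3600        ; refresh
                        900         ; retry
                        604800      ; expire
                        3600 )      ; negative-caching TTL
        @       IN  NS  ns1.mydomain.com.
        @       IN  NS  ns.hostprovider.com.    ; the provider's secondary
        ns1     IN  A   192.0.2.10              ; web server IP, also the glue record at the registrar
        @       IN  A   192.0.2.10
        www     IN  A   192.0.2.10

    The provider's secondary then copies the zone from your server with a zone transfer, so BIND on the Virtualmin box needs an allow-transfer entry for the secondary's IP. And yes: serving this zone as a master zone from BIND is exactly what makes your server the master DNS for the domain.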


  • Why doesn't the Windows 7 volume mixer remember per-application levels for all applications?

    - by mdives
    If I have the device's master level set to 50 and then lower an application's level to 25, then once I close that application and reopen it, the volume levels should persist: the master level should remain at 50 and the application's at 25. This does happen for most applications. However, for one in particular, Napster, it does not. I subscribe to Napster's streaming service and use the Napster desktop application to connect to it. Every time I open the Napster app, I have to adjust the application's volume level down in the volume mixer. When I open the app again after closing it, I have to do the same thing; the volume mixer is not remembering the set level. In fact, the level is reset back to 50, the same level as the device's master level. Has anyone else experienced this, with Napster or any other application? Is there a solution, or is this a known issue?


  • Postfix does not work after setting server hostname from plesk

    - by Michael
    I have recently set my server hostname from localhost.localdomain to xx.mydomain.com from the Plesk control panel. However, after this change Postfix has stopped working. I tried restarting it and regenerating the config files, but to no avail. I am not familiar with Postfix, but I believe there is a setting to be changed in main.cf. Here are the relevant errors I receive:

        postfix/postfix-script: starting the Postfix mail system
        postfix/master: daemon started -- version 2.8.4, configuration /etc/postfix
        postfix/cleanup: fatal: host/service localhost/12768 not found: Name or service not known
        postfix/pickup: warning: maildrop/E5559996219: error writing 4FA2C996217: queue file write error
        postfix/master: warning: process /usr/libexec/postfix/cleanup pid 15334 exit status 1
        postfix/master: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling

    Any ideas?

    EDIT: Setting it back to localhost.localdomain makes it work again. The only references to localhost/12768 I can find in main.cf are:

        smtpd_milters = inet:localhost:12768
        non_smtpd_milters = inet:localhost:12768

    Should something be changed here? These two lines stay the same when I change the hostname.

    EDIT: If I comment out the two mail filter lines (the ones shown above) and restart, Postfix works with the new hostname. However, this is obviously not the ideal solution...
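    The "Name or service not known" part suggests Postfix can no longer resolve the name localhost after the hostname change; one common cause is /etc/hosts losing its localhost entry when the hostname was rewritten, which would break exactly the milter lines above. A hedged sketch of the two usual checks (the IP shown for the new hostname is an example):

        # /etc/hosts -- make sure localhost is still resolvable
        127.0.0.1   localhost localhost.localdomain
        192.0.2.10  xx.mydomain.com xx

        # or sidestep name resolution entirely in /etc/postfix/main.cf
        smtpd_milters = inet:127.0.0.1:12768
        non_smtpd_milters = inet:127.0.0.1:12768

    Then run postfix reload and watch the log for the cleanup error.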


  • Users database empty after Samba3 to Samba4 migration on different servers

    - by ouzmoutous
    I have to migrate a Samba 3 server to a new Samba 4 server. My problem is that the database on the Samba 3 server seems a bit empty: the secrets.tdb file is only 20K, whereas the "pdbedit -L | wc -l" command gives me 16970 lines. On my Samba 3 server, /var/lib/samba is 1.5M. After I had migrated the database (following the instructions at http://dev.tranquil.it/index.php/SAMBA_-_Migration_Samba3_Samba4), the "pdbedit -L" command on the new server gives me only: SAMBA4$, Administrator, dns-samba4, krbtgt and nobody. So I tried to create a VM with a Samba 3. I added some users, did the same things I did for the migration, and now I can see the users created on the VM. It's like the users on the Samba 3 server are in a sort of cache. I have already migrated the /etc/{passwd,shadow,group} files and I can see the users with the "getent passwd" command. Any ideas why my users are present when I use pdbedit but the database is so empty?

    The global section of my smb.conf on the Samba 3 server:

        [global]
           workgroup = INTERNET
           netbios name = PDC-SMB3
           server string = %h server
           interfaces = eth0
           obey pam restrictions = Yes
           passdb backend = smbpasswd
           passwd program = /usr/bin/passwd %u
           passwd chat = *new* %n\n *Re* %n\n *pa*
           username map = /etc/samba/smbusers
           unix password sync = Yes
           syslog = 0
           log file = /var/log/samba/log.%U
           max log size = 1000
           socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
           add user script = /usr/sbin/useradd -s /bin/false -m '%u' -g users
           delete user script = /usr/sbin/userdel -r '%u'
           add group script = /usr/sbin/groupadd '%g'
           delete group script = /usr/sbin/groupdel '%g'
           add user to group script = /usr/sbin/usermod -G '%g' '%u'
           add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null '%u' -g machines
           logon script = logon.cmd
           logon home = \\$L\%U
           domain logons = Yes
           os level = 255
           preferred master = Yes
           local master = Yes
           domain master = Yes
           dns proxy = No
           ldap ssl = no
           panic action = /usr/share/samba/panic-action %d
           invalid users = root
           admin users = admin, root, administrateur
           log level = 2
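    Note the passdb backend = smbpasswd line above: with that backend the accounts live in the flat smbpasswd file (typically /etc/samba/smbpasswd), not in a TDB database, which would explain why the TDB files look nearly empty while pdbedit still lists 16970 accounts. A hedged sketch of converting the backend before migrating (paths are the common defaults; adjust to your build):

        # on the Samba 3 box: export the smbpasswd file into a tdbsam database
        pdbedit -i smbpasswd:/etc/samba/smbpasswd -e tdbsam:/var/lib/samba/passdb.tdb

        # verify the converted database lists the same number of accounts
        pdbedit -b tdbsam:/var/lib/samba/passdb.tdb -L | wc -l

    The Samba 4 upgrade path then has a real account database to import instead of the almost-empty TDBs.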


  • Permission / owner issue with pushing to git when editing directly from repo?

    - by Susan
    I have a web interface for deploying scripts from our repo at GitHub to our live server. The web interface just triggers a bash script with some git commands. If I make changes locally, push to the repo, then run the bash script to pull from the repo to live, it works fine. However, if I make changes directly in the repo (via GitHub's web interface), I run into fast-forward / lock issues. These are the steps I'm taking:

    1. Make a change on a file at the GitHub repo.
    2. Run a bash script (as apache) via web from the live server that attempts a git push / pull.
    3. Get these problems:

    PUSH

        To git@github.com:name/name.git
         ! [rejected]        master -> master (non-fast-forward)
        error: failed to push some refs to 'git@github.com:name/name.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again. See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    PULL

        From github.com:name/name
         * branch            master     -> FETCH_HEAD
        error: unable to unlink old 'includes/footer.inc' (Permission denied)
        Updating 8f6d922..d1eba9d
        Updating 8f6d922..d1eba9d

    If I SSH in as root and attempt a push / pull, it works fine. Ideas on why this method would not work from apache?
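    The unlink error is the giveaway: files in the live checkout are owned by someone other than the apache user (most likely root, from the SSH sessions), so git run as apache cannot replace them. A sketch of the usual repair, assuming the checkout lives at /var/www/site and the web server runs as user apache:

        # give the deploying user ownership of the working tree, including .git
        chown -R apache:apache /var/www/site

        # optional: relax permissions for a shared deploy group going forward
        cd /var/www/site && git config core.sharedRepository group

    The push rejection is a separate, ordinary non-fast-forward: the script needs to pull (or fetch and merge) the GitHub-side edit before its push can succeed.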


  • Monitoring multiple sites on a single server using OpsView

    - by Kev
    We have several web servers. On each of these servers there can be ~250 web sites. I need to add an HTTP check for each site on each server. Each site has a reserved host header that we know can always be resolved, in the format:

        w10000.hostchecks.mycompany.com
        w10020.hostchecks.mycompany.com
        w11992.hostchecks.mycompany.com
        ..and so on..

    What I want is a master ping check on the web server's main IP address and then separate HTTP checks for each of the sites on the server. If the master ping test fails, then I want the HTTP tests to cease until the master ping check goes OK. I had a stab at this and tried the following:

    - Create a parent host that does a ping check on the server's main IP address (e.g. the server is named WEB0001).
    - For each of the sites that reside on WEB0001: create a separate host with a primary hostname of wXXXXX.hostchecks.mycompany.com, make WEB0001 the parent host, and add a monitor (an HTTP check to a special URL that is mapped into each site using a virtual directory):

        -H $HOSTADDRESS$ -u /__hostcheck/IsAlive.aspx -w 5 -c 10 -p 80

    However, I find that if I down the parent server (WEB0001), the HTTP checks seem to continue. Am I going about this completely the wrong way?
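    In Nagios-based systems such as Opsview, the parent relationship only drives reachability logic and notifications (DOWN vs. UNREACHABLE); it does not stop active checks from being scheduled. Suppressing the checks themselves takes a host dependency with execution_failure_criteria set. A hedged sketch in raw Nagios syntax; Opsview normally generates this from its UI, so treat it as illustrative:

        define hostdependency {
            host_name                       WEB0001           ; the master (ping-checked) host
            dependent_host_name             w10000, w10020    ; the per-site hosts
            execution_failure_criteria      d,u               ; skip checks while WEB0001 is down/unreachable
            notification_failure_criteria   d,u
        }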


  • Multiple LDAP servers with mod_authn_alias: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf:

        <AuthnProviderAlias ldap master>
           AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
           AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
           AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
           AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
           AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
           AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf:

        #
        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        #
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
           <Location />
              AuthBasicProvider master slave
              AuthzLDAPAuthoritative Off
              AuthType Basic
              AuthName "Authorization required"
              AuthzLDAPMemberKey member
              AuthUserFile /home/setup/svn/auth-conf
              AuthzLDAPSetGroupAuth user
              require valid-user
              AuthzLDAPLogLevel error
           </Location>
        </IfModule>

    If I understand correctly, mod_authz_ldap will try to search for users in the second LDAP if the first server is down or OpenLDAP on it is not running. But in practice, this does not happen. Tested by stopping LDAP on the master: I get a "500 Internal Server Error" when accessing the Subversion repository, and the error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand? Does AuthBasicProvider ldap1 ldap2 only mean that if mod_authz_ldap can't find the user in ldap1, it will continue with ldap2, without any failover feature (i.e. ldap1 must be running and working fine)?
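    For what it's worth, Apache's LDAP layer has its own failover mechanism below the provider level: AuthLDAPURL accepts a space-separated list of hosts, and mod_ldap tries the next host when a connection fails. A hedged sketch collapsing the two aliases into one provider (same bind DN and password as above; the alias name is illustrative):

        <AuthnProviderAlias ldap ldapcluster>
            AuthLDAPURL "ldap://192.168.5.148 192.168.5.199/dc=domain,dc=vn?cn"
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    with AuthBasicProvider ldapcluster in the Location block. Exact port placement with multiple hosts is worth checking against the mod_authnz_ldap docs for your Apache version.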


  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements: (1) high availability, (2) load balancing.

    First configuration:
    1. Two Linux servers have been configured with one static IP each: 10.17.243.11, 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.

    Second configuration:
    1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1; real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine: we have two VIPs which are accessible (HA). But all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.

    Question: is there any way to have one virtual IP which is assigned to both servers? By that I mean both servers handling some part of the workload (like the thing we do in web server load balancing), using either Keepalived or some other tool? Thanks in advance.
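    A hedged keepalived.conf sketch of the crossed active-active layout from the second configuration; each node is MASTER for one instance and BACKUP for the other, so each VIP is always held somewhere (interface, router IDs and priorities are illustrative):

        # server #1 (invert the two state/priority pairs on server #2)
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress {
                10.17.243.10
            }
        }

        vrrp_instance VI_2 {
            state BACKUP
            interface eth0
            virtual_router_id 52
            priority 100
            virtual_ipaddress {
                10.17.243.20
            }
        }

    Truly sharing one VIP between two simultaneously active nodes is beyond plain VRRP; that is usually solved with an IPVS/LVS layer (which keepalived can also drive via its virtual_server blocks) or an upstream load balancer.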


  • Procurve Primary VLAN

    - by fukawi2
    I'm trying to deprecate the use of VLAN 1 on my ProCurve switches; VLAN 1 is unused. I understand that VLAN 1 must exist, but I want to remove it from all ports, especially the trunks between switches. The problem I have is that stacking does not seem to work without VLAN 1. I have changed the primary VLAN and management VLAN on all the switches:

        (config)# primary-vlan 42
        (config)# management-vlan 42
        (config)# no vlan 1 untagged 25

    Port 25 is the link between the 2 switches I'm testing with (the stack master and a member switch); I only want tagged traffic between the switches, no untagged frames. show stacking on the master shows all members as "UP", but I cannot telnet to any of them:

        Telnet failed: Connection timed out.

    All switches have manually assigned (static) IP addresses on VLAN 42, all in the same /25 subnet, as is my desktop. I can telnet to the switches directly from my desktop using the individual switch IP addresses, just not from the master switch. Do I need to reboot the switches for the primary-vlan change to take effect? Or is there something else I'm missing?
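    One thing worth double-checking, since the excerpt doesn't show it: removing the untagged VLAN 1 membership from port 25 doesn't by itself put VLAN 42 on that link, so the management VLAN may simply not be carried between the switches. A hedged pair of commands to try on both sides of the link:

        (config)# vlan 42 tagged 25
        (config)# show vlans ports 25

    The first carries VLAN 42 tagged across the inter-switch link; the second confirms which VLANs the port actually carries.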


  • Replacing HD in a Mac OS X 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 Mac OS X servers available for file sharing (quad Xeons, 2 GB RAM, both 10.6.8). No. 1 is an Open Directory master with 50+ user accounts; No. 2 has only 2 local accounts (/Local/Default) and looks to the OD master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have 3 internal HDs: the boot volume with only the server OS and minimal apps, a 'DataShare' HD (500 GB), and a backup drive (500 GB).

    After upgrading the DataShare HD in server No. 2 from a small internal HD (500 GB) to a larger-capacity (2 TB) drive, users are unable to connect to shares on server No. 2. Users get the error: "There are no shares available or you are not allowed to access them on the server."

    The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last-edit dates and times). Everything booted up OK, with no indication there were any issues (the paths to the sharepoint look good).

    Notes from troubleshooting:
    - Server 1 is operating perfectly; all users can access shares, authenticate, etc.
    - I've checked that the SACL (Service Access Control List) settings are OK.
    - On server 2, in the Server Admin app, I can see all the shares listed OK. The paths seem valid, and I can disable / re-enable the shares with no errors.
    - On server 2, Workgroup Manager lists all the accounts from the OD master in the LDAP directory view. All seems fine from here.

    Basically everything looks normal, but no file shares on server 2 can be accessed by regular users.


  • HD working with IDE USB adapter but not recognised by BIOS

    - by Rajeeva
    I have a Windows XP Pentium III desktop with two hard drives. The first one has the OS and is luckily working. The second drive, on the secondary master IDE channel, was a few days back unable to read some files; since then it was failing and reviving intermittently, and now it always shows as failed on the IDE channel.

    While the HD was intermittently failing, I was able to copy some data from it to the other drive. During that time, if the system was running and the hard disk failed, the system froze and I had to reboot.

    Then I got a new 80 GB HDD similar to the failing one (same make: Seagate Barracuda), a new data cable for the drive, and an IDE-to-USB adapter. I installed the new hard drive in the previous drive's place (secondary master) and formatted it; it worked for 1 day and then it also failed. In the meantime I connected the old HD to the IDE/USB adapter and could view all the data, and I managed to back up some of it from the old HD to the new HD before the new HD failed.

    I have tried connecting the new HD on the primary channel as the slave disk, but when I do that the BIOS does not detect either the OS drive or the new drive, and the system does not boot. Surprisingly, the older (previously failed) HD and the new HD both work fine on the USB channel with the IDE/USB adapter. I have ruled out any problem with the secondary channel, since the DVD-ROM I was earlier using as primary slave is now connected as secondary master and works fine.

    I am really confused by this behavior on my system. Please can anybody try to solve this for me? Thanks.


  • Are these hardware components compatible?

    - by Tom Kaufmann
    I am trying to build my new machine, and I want to do it myself. This is my first attempt at building a system. After carefully reviewing feedback against my budget, I have selected the components below. Can anybody tell me whether they are compatible?

    - Transcend 64 GB 2.5" SATA Solid State Drive
    - Asus GeForce GTX550 1 GB DDR5 ENGTX550 TI DI/1GD5 Graphics Card
    - Seagate Barracuda 1 TB Internal Hard Drive
    - Cooler Master eXtreme Power Pro 600 Power Supply
    - Intel Core i5 2500K Sandy Bridge 3.30 GHz 95 W 4-Core Desktop Processor
    - Intel DX79TO Motherboard
    - Corsair CMZ8GX3M2A1600C9 8 GB DDR3 SDRAM 1600 MHz Dual Channel Kit Desktop Memory
    - Sony AD-7260S-ZS Internal DVD Writer - Black
    - Cooler Master Hyper TX3 EVO Intel CPU Cooler
    - Cooler Master Elite 335U Cabinet
    - LG E2051T 20.1 Inch SuperSlim Monitor

    Is any of these hardware components incompatible with the i5 2500K? If you have any other suggestions for selecting other hardware that can boost performance, or lower the cost at the same performance, please suggest them. But my primary question is whether they are compatible or not! Any help is appreciated. Thank you.


  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes to a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also have write access, for when new members register or post topics, etc. The main requirement is speed, but uptime is also important (if a server goes down, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with:

    1) [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can request it from the other server over remote MySQL and load it. This idea sounded OK, until I realised that 50% of the time users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.

    2) [MySQL] Dual-master replication. This would mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast it is.

    3) [MySQL] Use standard replication, distribute read-only queries across both nodes, and send read/write queries to the master. This sounds like a good option, but again, I'm not sure about speed. I also don't know what would happen if the master server went down.

    If you have any other suggestions, please post them :)
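    For option 2, the standard safeguard against the two masters colliding on auto-increment IDs is to interleave them. A hedged my.cnf sketch (server IDs and log names are placeholders):

        # /etc/my.cnf on server A
        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2    # both masters step IDs by two...
        auto_increment_offset    = 1    # ...server A takes the odd values

        # on server B, the same block but with:
        # server-id                = 2
        # auto_increment_offset    = 2   (server B takes the even values)

    This removes the most common dual-master failure mode (duplicate keys), but conflicting UPDATEs to the same row can still diverge, which is why option 3 with a clear failover plan is often the safer default.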


  • Heavily customized split view controller in iPad app -- how?

    - by Macatomy
    I was going through some of the early screenshots of the first iPad apps and came upon one with a heavily customized split view. I'm just wondering: how is this done? Mainly, how has the detail view section of the split view been given a drop shadow and rounded corners, and, for lack of better phrasing, how has it been "separated" from the master view? (The default split view template has the master and detail views joined together with nothing but a vertical line separating the two.)
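    A hedged Objective-C sketch of the usual technique: skip the stock UISplitViewController, lay out the master table and a separate detail container view yourself, and style the container's layer (requires linking QuartzCore; detailContainer is an illustrative name, not something from the screenshot):

        #import <QuartzCore/QuartzCore.h>

        // detailContainer is a plain UIView holding the detail content
        detailContainer.layer.cornerRadius  = 8.0f;
        detailContainer.layer.shadowColor   = [UIColor blackColor].CGColor;
        detailContainer.layer.shadowOpacity = 0.5f;
        detailContainer.layer.shadowOffset  = CGSizeMake(0.0f, 2.0f);
        detailContainer.layer.shadowRadius  = 6.0f;
        // shadows need masksToBounds == NO; round-clip the content with an inner subview if needed
        detailContainer.layer.masksToBounds = NO;

    The visual "separation" from the master list then comes for free, since the two are just sibling views with their own frames.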


  • NSFetchedResultsController fetch request - updating predicate and UITableView

    - by Macatomy
    In my iPhone Core Data app I have it configured in a master-detail view setup. The master view is a UITableView that lists objects of the List entity. The List entity has a to-many relationship with the Task entity (called "tasks"), and the Task entity has an inverse to-one relationship with List called "list". When a List object is selected in the master view, I want the detail view (another UITableView) to list the Task objects that correspond to that List object. What I've done so far is this. In the detail view controller I've declared a property for a List object:

        @property (nonatomic, retain) List *list;

    Then in the master view controller I use this table view delegate method to set the list property of the detail view controller when a list is selected:

        - (void)tableView:(UITableView *)aTableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSManagedObject *selectedObject = [[self fetchedResultsController] objectAtIndexPath:indexPath];
            detailViewController.list = (List *)selectedObject;
        }

    Then I've overridden the setter for the list property in the detail view controller like this:

        - (void)setList:(List *)newList {
            if (list != newList) {
                [list release];
                list = [newList retain];
                NSPredicate *newPredicate = [NSPredicate predicateWithFormat:@"(list == %@)", list];
                [NSFetchedResultsController deleteCacheWithName:@"Root"];
                [[[self fetchedResultsController] fetchRequest] setPredicate:newPredicate];
                NSError *error = nil;
                if (![[self fetchedResultsController] performFetch:&error]) {
                    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                    abort();
                }
            }
        }

    What I'm doing here is setting a predicate on the fetched results to filter the objects so that I only get the ones that belong to the selected List object. The fetchedResultsController getter for the detail view controller looks like this:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController == nil) {
                NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
                NSEntityDescription *entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:managedObjectContext];
                [fetchRequest setEntity:entity];
                NSPredicate *predicate = [NSPredicate predicateWithFormat:@"FALSEPREDICATE"];
                [fetchRequest setPredicate:predicate];
                NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
                NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
                [fetchRequest setSortDescriptors:sortDescriptors];
                NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"];
                aFetchedResultsController.delegate = self;
                self.fetchedResultsController = aFetchedResultsController;
                [aFetchedResultsController release];
                [fetchRequest release];
                [sortDescriptor release];
                [sortDescriptors release];
            }
            return fetchedResultsController;
        }

    It's almost unchanged from the default in the Core Data project template; the change I made is to add a predicate that always returns false, the reason being that when there is no List selected, I don't want any items to be displayed in the detail view (if a list is selected, the predicate is changed in the setter for the list property). However, when I select a list item, nothing really happens. Nothing in the table view changes; it stays empty. I'm sure my logic is flawed in several places; advice is appreciated. Thanks.
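    One gap worth flagging in the setter above (an observation, not a confirmed diagnosis of the whole problem): performFetch: only refreshes the controller's fetchedObjects; it never redraws the table. Reloading after a successful fetch is the usual missing step:

        // at the end of setList:, once performFetch: succeeds
        [self.tableView reloadData];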


  • Concatenate & Minify JS on the fly OR at build time - ASP.NET MVC

    - by Charlino
    As an extension to the question "Linking JavaScript Libraries in User Controls", I was after some examples of how people are concatenating & minifying JavaScript on the fly OR at build time, and how it then works into your master pages. I don't mind page-specific files being minified and linked individually as they currently are (see below), but all the JS files on the main master page (I have about 5 or 6) I would like concatenated and minified. Bonus points for anyone who also incorporates CSS concatenation & minification! :-)

    Current master page with the common JS files that I would like concatenated & minified:

        <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
        <head runat="server">
            ... BLAH ...
            <asp:ContentPlaceHolder ID="AdditionalHead" runat="server" />
            ... BLAH ...
            <%= Html.CSSBlock("/styles/site.css") %>
            <%= Html.CSSBlock("/styles/jquery-ui-1.7.1.css") %>
            <%= Html.CSSBlock("/styles/jquery.lightbox-0.5.css") %>
            <%= Html.CSSBlock("/styles/ie6.css", 6) %>
            <%= Html.CSSBlock("/styles/ie7.css", 7) %>
            <asp:ContentPlaceHolder ID="AdditionalCSS" runat="server" />
        </head>
        <body>
            ... BLAH ...
            <%= Html.JSBlock("/scripts/jquery-1.3.2.js", "/scripts/jquery-1.3.2.min.js") %>
            <%= Html.JSBlock("/scripts/jquery-ui-1.7.1.js", "/scripts/jquery-ui-1.7.1.min.js") %>
            <%= Html.JSBlock("/scripts/jquery.validate.js", "/scripts/jquery.validate.min.js") %>
            <%= Html.JSBlock("/scripts/jquery.lightbox-0.5.js", "/scripts/jquery.lightbox-0.5.min.js") %>
            <%= Html.JSBlock("/scripts/global.js", "/scripts/global.min.js") %>
            <asp:ContentPlaceHolder ID="AdditionalJS" runat="server" />
        </body>

    Used in a page like this (which I'm happy with):

        <asp:Content ID="signUpContent" ContentPlaceHolderID="AdditionalJS" runat="server">
            <%= Html.JSBlock("/scripts/pages/account.signup.js", "/scripts/pages/account.signup.min.js") %>
        </asp:Content>

    EDIT: What I'm using now. Since asking this question, Microsoft have released their own JS & CSS compression library called Microsoft Ajax Minifier; I'd definitely recommend checking it out. It includes MSBuild tasks, which are the duck's nuts.
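    For the build-time route mentioned in the edit, a hedged sketch of wiring the Ajax Minifier MSBuild task into a project file; the import path and attribute names are from memory of the AjaxMin documentation, so verify them against the version you install:

        <Import Project="$(MSBuildExtensionsPath)\Microsoft\MicrosoftAjax\AjaxMin.tasks" />
        <Target Name="AfterBuild">
          <ItemGroup>
            <JS Include="Scripts\**\*.js" Exclude="Scripts\**\*.min.js" />
            <CSS Include="Styles\**\*.css" Exclude="Styles\**\*.min.css" />
          </ItemGroup>
          <!-- writes foo.min.js / foo.min.css beside each source file -->
          <AjaxMin JsSourceFiles="@(JS)" JsSourceExtensionPattern="\.js$" JsTargetExtension=".min.js"
                   CssSourceFiles="@(CSS)" CssSourceExtensionPattern="\.css$" CssTargetExtension=".min.css" />
        </Target>

    Concatenation into a single bundle is a separate step (or a helper like Html.JSBlock above emitting one combined URL); the task as sketched only minifies file by file.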


  • How to deploy 2 SharePoint page layouts

    - by mickey
    I have a problem deploying two page layouts. I can deploy one page layout with no problem, but if I add another one, the first one gets the same layout as the newly added second page layout... Here is the XML I add to Elements.xml:

        <File Path="masterpage\CustomLayout.aspx" Url="CustomLayout.aspx" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="CustomLayout" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>

        <File Path="masterpage\HomePageLayout.aspx" Url="HomePageLayout.aspx" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="HomePageLayout" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>

        <File Path="masterpage\masterpage.master" Url="masterpage.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="My Custom masterpage" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>

        <File Path="masterpage\masterpage2.master" Url="masterpage2.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="My Custom masterpage 2" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>


  • Error with git: remote HEAD is ambiguous, may be one of the following

    - by vfclists
    After branching and pushing to the remote, git remote show origin gives this report:

        HEAD branch (remote HEAD is ambiguous, may be one of the following):
          master
          otherbranch

    What does this imply? Is it a critical error?

        remote origin
          Fetch URL: [email protected]:/home/gituser/repos/csfsconf.git
          Push URL: [email protected]:/home/gituser/repos/csfsconf.git
          HEAD branch (remote HEAD is ambiguous, may be one of the following):
            master
            otherbranch
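    This is usually harmless rather than critical: the bare repository's HEAD points at a commit that is currently the tip of more than one branch, so git cannot tell which branch is the default. A sketch of the two usual fixes (repository paths are placeholders):

        # on the server, inside the bare repository: set the default branch explicitly
        git symbolic-ref HEAD refs/heads/master

        # or just record a default for origin in your local clone
        git remote set-head origin master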


  • Custom ViewEngine problem with ASP.NET MVC

    - by mare
    In this question jfar answered with his solution; the problem is it does not work for me. In the FindView() method I have to somehow check whether the view we are requesting is a ViewUserControl, because otherwise I get an error saying: "A master name cannot be specified when the view is a ViewUserControl." This is my custom view engine code right now:

        public class PendingViewEngine : VirtualPathProviderViewEngine
        {
            public PendingViewEngine()
            {
                // This is where we tell MVC where to look for our files.
                /* {0} = view name or master page name
                 * {1} = controller name */
                MasterLocationFormats = new[] { "~/Views/Shared/{0}.master", "~/Views/{0}.master" };
                ViewLocationFormats = new[]
                {
                    "~/Views/{1}/{0}.aspx", "~/Views/Shared/{0}.aspx",
                    "~/Views/Shared/{0}.ascx", "~/Views/{1}/{0}.ascx"
                };
                PartialViewLocationFormats = new[] { "~/Views/{1}/{0}.ascx", "~/Views/Shared/{0}.ascx" };
            }

            protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
            {
                return new WebFormView(partialPath, "");
            }

            protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
            {
                return new WebFormView(viewPath, masterPath);
            }

            public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
            {
                if (controllerContext.HttpContext.Request.IsAjaxRequest())
                    return base.FindView(controllerContext, viewName, "Modal", useCache);
                return base.FindView(controllerContext, viewName, "Site", useCache);
            }
        }

    The above view engine fails on calls like this:

        <% Html.RenderAction("Display", "WidgetZoneV2", new { zoneslug = "left-default-zone" }); %>

    As you can see, I am providing route values to my RenderAction call. The action I am rendering here is this:

        // Widget zone name is unique
        // GET: /WidgetZoneV2/{zoneslug}
        public ActionResult Display(string zoneslug)
        {
            zoneslug = Utility.RemoveIllegalCharacters(zoneslug);
            // Displaying a widget zone creates a new widget zone if it does not exist yet,
            // so it prepares our page for dropping of content widgets
            WidgetZone zone;
            if (!_repository.IsUniqueSlug(zoneslug))
                zone = (WidgetZone)_repository.GetInstance(zoneslug);
            else
            {
                // replace slug dashes with spaces to convert it into a title
                zone = new WidgetZone { Slug = zoneslug, Title = zoneslug.Replace('-', ' '), WidgetsList = new ContentList() };
                _repository.Insert(zone);
            }
            ViewData["ContentItemsList"] = _repository.GetContentItems();
            return View("WidgetZoneV2", zone);
        }

    I cannot use RenderPartial (at least I don't know how) the way I can use RenderAction. To my knowledge there is no way to provide a RouteValueDictionary to RenderPartial() the way you can to RenderAction().
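    One way to sidestep the ViewUserControl error, sketched under the assumption that .ascx views should simply skip the forced "Modal"/"Site" master rather than receive it: resolve the view with no master first, and only re-resolve with a master when the hit is an .aspx page. This relies on WebFormView exposing its ViewPath, and is a sketch, not a confirmed fix:

        public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
        {
            // Probe with no master; partials (.ascx) are only valid this way.
            ViewEngineResult plain = base.FindView(controllerContext, viewName, "", useCache);
            WebFormView webForm = plain.View as WebFormView;
            if (webForm != null && webForm.ViewPath.EndsWith(".ascx", StringComparison.OrdinalIgnoreCase))
                return plain; // a ViewUserControl: applying a master name would throw

            string master = controllerContext.HttpContext.Request.IsAjaxRequest() ? "Modal" : "Site";
            return base.FindView(controllerContext, viewName, master, useCache);
        }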


  • How to specify an area name in an action link?

    - by Jeremy
    I have a shared master page which I am using from 2 different areas in my MVC 2 app. The master page has an action link which currently specifies the controller and action, but of course the link doesn't work if I'm in the wrong area. I see no overload of ActionLink that takes an area parameter; is it possible to do?
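    There is no area-specific overload, but MVC 2 routes areas through an ordinary route value named "area", so it fits in the routeValues argument of an existing overload. A sketch with placeholder names:

        <%= Html.ActionLink("Dashboard", "Index", "Home", new { area = "Admin" }, null) %>

        <%-- an empty area targets the root (non-area) routes --%>
        <%= Html.ActionLink("Home", "Index", "Home", new { area = "" }, null) %>

    The trailing null is needed to pick the overload that treats the anonymous object as route values rather than HTML attributes.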


  • Can I tell git pull to overwrite instead of merge?

    - by Michael Stum
    As far as I can see, git pull someRemote master tries to merge the remote branch into mine. Is there a way to say "completely discard my stuff, just make me another clone of the remote" using git pull? I still want to keep my own repository and its history, but I want to have a 1:1 copy of someRemote's master branch after that command.
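    git pull itself always fetches and then merges, but the end state described is the standard two-step idiom; a sketch, assuming you are on master:

        git fetch someRemote
        git reset --hard someRemote/master    # discard local commits and working-tree changes

    The repository, its other branches and its reflog all survive; only master is forcibly moved to match the remote.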


  • How to handle relative paths in ASP.NET MVC?

    - by AngryHacker
    I have a master page which references a style in the following manner:

        <link rel="stylesheet" type="text/css" href="../../Content/Style.css" />

    All my pages inherit from this master page. This works well when the URL is http://www.domain.com/home/details/5; however, when the URL is http://www.domain.com/home/create, Style.css cannot be found, because '../../Content/Style.css' resolves to a directory one level higher, where there is nothing. How is this typically handled?
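    The usual fix is to emit an application-rooted path instead of a relative one: Url.Content expands the ~ prefix to the application root no matter which route rendered the page. A sketch for the master page:

        <link rel="stylesheet" type="text/css" href="<%= Url.Content("~/Content/Style.css") %>" />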


  • Handling file renames in git

    - by Greg K
    I'd read that when renaming files in git, you should commit any changes, perform your rename and then stage your renamed file. Git will recognise the file from the contents, rather than seeing it as a new untracked file, and keep the change history. However, doing just this tonight I ended up reverting to git mv.

        $ git status
        # On branch master
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   index.html
        #

    Rename my stylesheet in Finder from iphone.css to mobile.css:

        $ git status
        # On branch master
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   index.html
        #
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       deleted:    css/iphone.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       css/mobile.css

    So git now thinks I've deleted one CSS file and added a new one. Not what I want; let's undo the rename and let git do the work.

        $ git reset HEAD .
        Unstaged changes after reset:
        M       css/iphone.css
        M       index.html

    Back to where I began:

        $ git status
        # On branch master
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   index.html
        #

    Let's use git mv instead:

        $ git mv css/iphone.css css/mobile.css
        $ git status
        # On branch master
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       renamed:    css/iphone.css -> css/mobile.css
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   index.html
        #

    Looks like we're good. So why didn't git recognise the rename the first time around, when I used Finder?
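    Rename detection only happens once both sides are staged: git compares the staged deletion against the staged addition and, when the contents are similar enough, reports a rename (git mv is just the delete and add done in one step). A sketch of the Finder route succeeding:

        $ git add -A
        $ git status
        # renamed:    css/iphone.css -> css/mobile.css

    git add -A stages the deletion of css/iphone.css and the untracked css/mobile.css together, which is what was missing the first time.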

