Search Results

Search found 17771 results on 711 pages for 'dhcp option'.

Page 222 of 711

  • How can I get jQuery validation plugin Ketchup to stop an Ajax form submission when validation fails?

    - by Marshall Sontag
    I'm using Ruby on Rails, the Formtastic gem, jQuery, and Ketchup to validate my form. I'm submitting the form created by Formtastic inside a modal box using Ajax:

        <% semantic_form_remote_for @contact_form, :url => '/request/contact' do |f| %>

    I have a validation plugin verifying the fields on the form:

        $(document).ready(function() {
          $("#new_contact_form").ketchup();
        });

    The problem is that semantic_form_remote_for generates an onSubmit Ajax request that the jQuery validation plugins won't prevent, since it's not a normal form submission. One question on Stack Overflow suggests using :condition on the remote form declaration to fire a JavaScript function, but I can't do that since I'm not using a function, but rather relying on a jQuery handler. I also tried putting Ketchup within a submit event handler:

        $(document).ready(function() {
          $("#new_contact_form").submit(function() {
            $('#new_contact_form').ketchup();
          });
        });

    No luck. The form still submits. I also tried using the beforeSend option of jQuery.ajax:

        $(document).ready(function() {
          jQuery.ajax({
            beforeSend: function() {
              $('#new_contact_form').ketchup();
            }
          });
        });

    Validation fires, but the form is still submitted. I switched to the jQuery Validation plugin just to see if the problem was due to some limitation in Ketchup. It turns out that Validation has a submitHandler option:

        $(document).ready(function() {
          $('#new_contact_form').validate({
            submitHandler: function(form) {
              jQuery.ajax({
                data: jQuery.param(jQuery('#new_contact_form').serializeArray()),
                dataType: 'script',
                type: 'post',
                url: '/request/contact'
              });
              return false;
            }
          });
        });

    This works when I use a regular semantic_form_for instead of semantic_form_remote_for, but alas, I would rather use Ketchup. Is Ketchup just woefully lacking? Am I forced to use jQuery Validation?
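
    One possible angle, offered as an untested sketch: the :condition option on the Rails remote form helpers accepts any JavaScript expression, not just a named function, so a jQuery check can be inlined there to veto the Ajax call. The .ketchup-error-container class name below is an assumption about Ketchup's generated markup — verify it against your Ketchup version:

        <% semantic_form_remote_for @contact_form, :url => '/request/contact',
             :condition => "jQuery('#new_contact_form .ketchup-error-container:visible').length == 0" do |f| %>

    Since Ketchup validates on its own events rather than on submit, it may also be necessary to trigger validation (e.g. by blurring the fields) inside that expression before counting the error containers.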

    Read the article

  • Locking issues with replacing files on a website

    - by Moe Sisko
    I want to replace existing files on an IIS website with updated versions. Say these files are large PDF documents which can be accessed via hyperlinks. The site is up 24x7, so I'm concerned about locking issues when a file is being updated at exactly the same time that someone is trying to read the file. The files are updated using C# code run on the server. I can think of two options for opening the file for writing.

    Option 1: open the file for writing using FileShare.Read:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the document opens up as a blank page.

    Option 2: open the file for writing using FileShare.None:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the browser shows an error. In IE 8 you get HTTP 500, "The website cannot display the page", and in Firefox 3.5 you get: "The process cannot access the file because it is being used by another process." The browser behaviour kind of makes sense and seems reasonable. I guess it's highly unlikely that a user will attempt to read a file at exactly the same time you are updating it. It would be nice if, somehow, the file update was atomic, like updating a database with SQL wrapped in a transaction. I'm wondering if you guys worry about this sort of thing, and prefer either of the above options, or even have other options of your own for updating files.
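
    A minimal C# sketch of a near-atomic swap, assuming the new version can be staged in the same directory first (File.Replace is a standard System.IO call, though whether IIS's own file handles ever interfere is worth testing on your setup):

        // Stage the new version next to the live file, then swap it in.
        string temp = path + ".tmp";
        using (FileStream stream = new FileStream(temp, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            WriteNewContent(stream); // hypothetical helper holding your update logic
        }
        // Readers keep whichever file they already opened; new requests see the new file.
        File.Replace(temp, path, path + ".bak");

    On the same NTFS volume the rename step is atomic with respect to the destination name, which is about as close to the transactional behaviour described above as the filesystem gets.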

    Read the article

  • How can I parse/transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs). It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it? One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.

    Read the article

  • Printing PDF using AirPrint causes cut-off content

    - by Jatin Patel
    I am printing a PDF with page size pageSize = CGSizeMake(640, 832). This is larger than an A4 page, so some text gets cut off (it will not print the whole page). When printing the same PDF from a Mac, it prints the whole page with the help of the "scale to fit" option. Can anyone help me out of this problem? Is there any option in the iOS SDK for scale to fit? Here is my code:

        -(void)printItem {
            NSArray *aArrPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *aStr = [[aArrPaths objectAtIndex:0] stringByAppendingPathComponent:
                                 [NSString stringWithFormat:@"PropertyReport_%d.pdf", self.propertyId]];
            // NSString *aStr = [[NSBundle mainBundle] pathForResource:@"TRADUZIONE HELP SECTIONS REV2" ofType:@"pdf"];
            NSURL *url = [[NSURL alloc] initFileURLWithPath:aStr];
            NSData *data = [[NSData alloc] initWithContentsOfURL:url];
            printController = [UIPrintInteractionController sharedPrintController];
            if (printController && [UIPrintInteractionController canPrintData:data]) {
                printController.delegate = self;
                UIPrintInfo *printInfo = [UIPrintInfo printInfo];
                printInfo.outputType = UIPrintInfoOutputGeneral;
                //printInfo.jobName = [NSString stringWithFormat:@"New Image"];
                printInfo.duplex = UIPrintInfoDuplexLongEdge;
                printController.printInfo = printInfo;
                printController.showsPageRange = YES;
                printController.printingItem = data;
                void (^completionHandler)(UIPrintInteractionController *, BOOL, NSError *) =
                    ^(UIPrintInteractionController *printController, BOOL completed, NSError *error) {
                        if (!completed && error) {
                            //NSLog(@"FAILED! due to error in domain %@ with error code %u", error.domain, error.code);
                        }
                    };
                // aWebViewPDF.hidden = FALSE;
                [printController presentAnimated:YES completionHandler:completionHandler];
            }
        }

    Thanks, Jatin Patel
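
    As far as I know there is no literal scale-to-fit switch on UIPrintInteractionController, but a custom page renderer can achieve the same effect. A sketch, untested: CGPDFPageGetDrawingTransform computes a transform that scales a PDF page down to fit a target rect while preserving aspect ratio. The class name is invented; the coordinate flip may need adjusting to your printable rect:

        // Hypothetical renderer that draws each PDF page scaled to the paper rect.
        @interface ScaledPDFRenderer : UIPrintPageRenderer
        @property (nonatomic, assign) CGPDFDocumentRef pdf; // caller owns/releases
        @end

        @implementation ScaledPDFRenderer
        - (NSInteger)numberOfPages {
            return CGPDFDocumentGetNumberOfPages(self.pdf);
        }
        - (void)drawPageAtIndex:(NSInteger)index inRect:(CGRect)rect {
            CGPDFPageRef page = CGPDFDocumentGetPage(self.pdf, index + 1); // PDF pages are 1-based
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSaveGState(ctx);
            // Flip the context: PDF space has its origin at the bottom left.
            CGContextTranslateCTM(ctx, 0, CGRectGetMaxY(rect));
            CGContextScaleCTM(ctx, 1.0, -1.0);
            // Scales the page *down* to fit rect, preserving aspect ratio.
            CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox, rect, 0, true));
            CGContextDrawPDFPage(ctx, page);
            CGContextRestoreGState(ctx);
        }
        @end

    Then, instead of setting printController.printingItem = data, set printController.printPageRenderer to an instance of this renderer wrapping CGPDFDocumentCreateWithURL((CFURLRef)url).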

    Read the article

  • Compile C++ file as objective-c++ using makefile

    - by Vikas
    I'm trying to compile .cpp files as Objective-C++ using a makefile, as a few of my .cpp files have Objective-C code. I added -x objective-c++ as a compiler option and started getting "stray '\327' in program" errors (and lots of similar errors with different numbers after the backslash). There are around 200 errors. But when I change the encoding of the file from UTF-8 to UTF-16, the errors reduce to 23. Currently there is no Objective-C++ code in the .cpp file, but I plan to add some in future. When I remove -x objective-c++ from the compiler options, everything compiles fine and the .out is generated. I would be grateful if someone could tell me why this is happening, and even a solution for the same. Thanks in advance. Here is an example of my makefile:

        MACHINE = $(shell uname -s)
        CFLAGS ?= -w -framework CoreServices -framework ApplicationServices -framework CoreFoundation -framework CoreWLAN -framework Cocoa -framework Foundation
        ifeq ($(MACHINE),Darwin)
        CCLINK ?= -lpthread
        else
        CCLINK ?= -lpthread -lrt
        endif
        DEBUG ?= -g -rdynamic -ggdb
        CCOPT = $(CFLAGS) $(ARCH) $(PROF)
        CC = g++ -x objective-c++
        AR = ar rcs
        # lib name
        SLIB_NAME = myapplib
        EXENAME = myapp.out
        OBJDIR = build
        OBJLIB := $(addprefix $(OBJDIR)/,... all .o files)
        SS_OBJ := $(addprefix $(OBJDIR)/,myapp.o)
        vpath %.cpp [path to my .cpp files]
        INC = [include files]

        subsystem:
                make all

        $(OBJLIB): | $(OBJDIR)

        $(OBJDIR):
                mkdir $(OBJDIR)

        $(OBJDIR)/%.o: %.cpp
                $(CC) -c $(INC) $(CCOPT) $(DEBUG) $(CCLINK) $< -o $@

        all: $(OBJLIB) $(CLI_OBJ) $(SS_OBJ)
                $(AR) lib$(SLIB_NAME).a $(OBJLIB)
                $(CC) $(INC) $(CCOPT) $(SS_OBJ) $(DEBUG) $(CCLINK) -l$(SLIB_NAME) -L ./ -o $(OBJDIR)/$(EXENAME)

        clean:
                rm -rf $(OBJDIR)/*

        dep:
                $(CC) -MM *.cpp

    Read the article

  • Vista WHS Client stopped resolving local names

    - by andrewcr
    I’m running Windows Home Server PP2 in my home, with 3 client computers: two XP and one Vista. I have a router that provides my local DHCP and the server has a static IP address. The other day the Vista machine hung, and on reboot stopped resolving local names. It will show the green home server client icon in the system tray, but if I attempt to log in to the console, I get a “This computer cannot connect to your home server” message. If I ping the server name from the command line, it does not resolve, and gives a “could not find host” message. Oddly enough, if I browse the network, I can see the server, but double clicking on it fails. The other machines on the local network have no problems seeing the server, and the Vista machine has no problems resolving names from the internet, it just can’t see any local machines. I’m aware that I can work around this by adding entries to my HOSTS file (it does work), but I’d like this to work the way it’s “supposed” to. I’m an experienced computer user and developer, but not a networking whiz. Can anyone tell me how local name resolution is supposed to work in my environment and/or suggest ways to troubleshoot this? Thanks, Andy
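
    For troubleshooting, note that on a home LAN with no local DNS server, names like the server's are usually resolved by NetBIOS/LLMNR broadcast rather than DNS, so that side is worth exercising. A few standard Windows commands to try from the Vista box, as a starting sketch (SERVER stands for your WHS name; run from an elevated prompt):

        rem Clear the DNS resolver cache, then purge and reload the NetBIOS name cache
        ipconfig /flushdns
        nbtstat -R
        rem Query the server's NetBIOS name table directly
        nbtstat -a SERVER
        rem Retest name resolution
        ping SERVER

    If nbtstat -a succeeds while ping by name still fails, the broadcast path is fine and the miss is higher up — e.g. NetBIOS over TCP/IP disabled on the adapter, or LLMNR blocked by the firewall.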

    Read the article

  • Producing 64-bit builds on Windows with free software

    - by pauldoo
    Hi, I have a C++ project that I've been developing in Microsoft Visual C++ 2008 Express Edition. It has come to the point that I'd like to port it to 64-bit and continue development. What is the best way to do this using free software? My thoughts so far: The Express Edition of MSVC doesn't come with 64-bit compilers, so I can install the Windows SDK to get these. I could then port my project files to nmake, and use the IDE just as a tool to debug and invoke my nmake scripts. The downside to this is that nmake looks very poor. The example towards the end of this tutorial suggests that nmake cannot figure out source file dependencies itself, and I don't know of anything equivalent to gcc -M that I could use. Another option might be to use vcbuild from the Windows SDK to produce 64-bit builds from my existing vcproj files. Preliminary investigations show that this doesn't really work, as my project files don't have the 64-bit configurations present. (Perhaps I could fudge this by adding the 64-bit configurations to the vcproj files in a text editor.) A final option might be to give up on MSVC, and port my project to the MinGW/MSYS toolchain.

    Read the article

  • Possible DNS issue after a reinstall of Windows Server 2000 (get off my lawn)

    - by cop1152
    I just replaced a drive on a Win2000 server that replicates AD and issues DHCP at one of our offices. I successfully joined it to the domain, set up a range of IPs, etc., but am still having issues. I cannot RDC to it by name or IP. I can ping it, browse to it with Windows Explorer, and remote to it with some other software, but not RDC. The other issue is this: users are unable to authenticate on it. They receive the message 'username or password incorrect' (or something like that). Changes made on the main domain controller seem to take forever to trickle down. The most significant entry in the DNS Server log is Event ID 7062: The DNS Server Encountered a Packet Addressed to Itself. At least, I think it's significant. The Directory Services log shows numerous Event ID 1265 entries: The attempt to establish a replication link with parameters failed with the following status: The DSA operation is unable to proceed because of a DNS lookup failure. Does this make any sense to anyone? I feel like it's something very simple that I am overlooking. Thanks in advance.

    Read the article

  • Configure Cisco Pix 515 with DMZ and no NAT

    - by Rickard
    I hope that someone can shed some light on my situation, as I am fairly new to PIX configurations. I will be getting a new network for my department, which I am going to configure. On hand I have a Cisco PIX 515 (not E) and a Cisco 2948 switch (and if needed, I can bring up a 2621XM router, but that is private and not owned by my dept.). The network I will be getting is the following:

        My network: 10.12.33.0/26
        Link net between the ISP routers and my network: 10.12.32.0/29, where the gateway is .1 and the HSRP routers are .2 and .3

    The ISP has asked me not to NAT the addresses on my side, as they will set it up to give 10.12.33.2 a one-to-one NAT to a public IP. The rest of the IPs will get a many-to-one NAT to another public IP. 10.12.33.2 is supposed to be my server placed in the DMZ; the rest of the IPs will be used for my clients and the AD server (which is currently also acting as a DHCP server in the old network config with another ISP). Now, the question is, how would I best configure this? I mean, am I thinking wrong here? I am expected to put the PIX first from the ISP outlet, then the switch which will connect my clients. But with the ISP routers being on a different network, how will the firewall forward the packets to the other network? It's a firewall, not a router. I have actually never configured a PIX before, and fortunately this is more like a lab network, not a production network, so if something goes wrong it's not the end of the world, just annoying. I am not asking for a full configuration from anyone, just some directions, or possibly some links which will give me some hints. Thank you very much!
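
    Since a full configuration wasn't asked for, just a nudge: the PIX routes between its own interfaces; it only needs a default route pointing at the ISP gateway, and "no NAT" is expressed as an identity (nat 0) rule. A heavily hedged sketch in PIX 6.x syntax — every address and name below is a guess to be adapted:

        ! Outside leg sits in the 10.12.32.0/29 link net; default route at the HSRP gateway
        ip address outside 10.12.32.4 255.255.255.248
        ip address inside 10.12.33.1 255.255.255.192
        route outside 0.0.0.0 0.0.0.0 10.12.32.1 1
        ! "No NAT": identity-translate the inside net instead of hiding it
        access-list NONAT permit ip 10.12.33.0 255.255.255.192 any
        nat (inside) 0 access-list NONAT

    The DMZ host (10.12.33.2) would additionally need an access-list permitting the inbound traffic you expect, and a static if it ever gets its own interface.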

    Read the article

  • apache2: ssl_error_rx_record_too_long when visiting port 80?

    - by John
    Hi, I have an Ubuntu 10 x64 server edition machine. I got a second IP and configured /etc/network/interfaces like so (actual IPs and gateways removed):

        auto lo
        iface lo inet loopback
        #iface eth0 inet dhcp

        auto eth0
        auto eth0:0
        iface eth0 inet static
            address [my first IP]
            netmask 255.255.255.0
            gateway [my first gateway]
        iface eth0:0 inet static
            address [my second IP]
            netmask 255.255.255.0
            gateway [my second gateway]

    /etc/apache2/ports.conf:

        Listen 80
        NameVirtualHost [my first IP]:80
        NameVirtualHost [my second IP]:80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
            NameVirtualHost [my first IP - some site is running SSL successfully using it]:443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    /etc/apache2/sites-enabled/mysite.conf:

        <VirtualHost [my second IP]:80>
            ServerName mysite.com
            Include /var/www/mysite.com/djangoproject/apache/django.conf
        </VirtualHost>

    Then when visiting http[mysite].com:80 or http[mysite].com (:// removed because serverfault doesn't allow me to post hyperlinks), I get:

        An error occurred during a connection to [mysite].com.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My guess is that the configuration file is not being picked up, and Apache is therefore looking for the default-ssl file, which is not in conf-enabled. If I were to configure that file properly, it seems I would successfully connect to whatever default directory is specified in the default-ssl file. But I want to connect to my website. Any ideas? Thanks in advance!
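
    For what it's worth, ssl_error_rx_record_too_long generally means the browser spoke TLS to a socket that answered with plain HTTP (or reached a 443 vhost with no SSLEngine on), rather than a missing vhost file. A couple of standard checks, sketched with a placeholder hostname:

        # What actually answers on each port?
        curl -v http://mysite.com/
        openssl s_client -connect mysite.com:443

        # Which vhosts has Apache actually loaded, and from which files?
        apache2ctl -S

    If the name resolves to the first IP while the new vhost only listens on the second, requests land in the wrong vhost; apache2ctl -S makes that visible.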

    Read the article

  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches:

        1. All data is stored in one database, giving me the advantage of a single point of updates.

        2. Separate databases, so each client has its own database, giving me the advantage of measuring bandwidth.

    Option 1 gives me the disadvantage on the bandwidth-measuring side, while option 2 gives me the disadvantage on the single-point-of-update side. Are there any generic approaches for creating a sort of update system? So my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database?). Do you guys have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you, and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen
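
    For what it's worth, approach 1 is commonly modeled by scoping every content table to a site row — a minimal sketch in MySQL, with all table and column names invented for illustration:

        CREATE TABLE sites (
          id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
          name VARCHAR(100) NOT NULL
        );

        CREATE TABLE pages (
          id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
          site_id INT UNSIGNED NOT NULL,
          title   VARCHAR(200) NOT NULL,
          body    TEXT,
          FOREIGN KEY (site_id) REFERENCES sites (id)
        );

    Per-client bandwidth could then still be measured outside the DBMS (e.g. per-vhost web server logs), which may soften the main disadvantage of the single-database route.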

    Read the article

  • Windows Home Server backup error

    - by domen
    I've finally built my WHS, but some other problems have shown up. I've googled, binged and searched SU with no success. The problem is the following: at the moment I've got a Win7 laptop and a Win7 PC, which should be backed up by the WHS. The laptop is backed up just fine with no issues, but when I try to manually back up the PC, after the "backup is starting" message, when the backup service should be monitoring changes on partitions, the PC gets disconnected from the home network and thus the backup process is stuck. Disabling/enabling the network adapter gets the PC back on the network. The only thing I've tried was reinstalling the connector software — no success. Also, I've downloaded the connector troubleshooter, and the only thing it says is "DHCP server was not found". I'm not good with networks, so I couldn't figure out what that could indicate (all computers in the network are assigned static IPs). Any ideas what the problem can be? I can provide any additional information; I'm just not sure what may be helpful right now. Thanks.

    Read the article

  • How to set default date in date_select helper in Rails

    - by brad
    I'm trying to set up a date-of-birth helper in my Rails app (2.3.5). At present it is like so:

        <%= f.date_select :date_of_birth, :start_year => Time.now.year - 110,
                          :end_year => Time.now.year %>

    This generates a perfectly functional set of date fields that work just fine, but... they default to today's date, which is not ideal for a date-of-birth field (I'm not sure what is, but unless you're running a neonatal unit, today's date seems less than ideal). I want it to read Jan 1 2010 instead (or 2011, or whatever the year happens to be). Using the :default option has proven unsuccessful. I've tried many possibilities, including:

        <%= f.date_select :date_of_birth,
                          :default => { :year => Time.now.year, :month => 'Jan', :day => 1 },
                          :start_year => Time.now.year - 110,
                          :end_year => Time.now.year %>

    and

        <%= f.date_select :date_of_birth,
                          :default => Time.local(2010, 'Jan', 1),
                          :start_year => Time.now.year - 110,
                          :end_year => Time.now.year %>

    None of this changes the behaviour of the first example. Does the :default option actually work as described? It seems that this should be a fairly straightforward thing to do. Ta.
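
    One workaround, sketched but untested against 2.3.5: date_select populates its selects from the record's attribute value, so priming the attribute on new records sidesteps :default entirely (@person stands in for whatever object f is built on):

        # In the controller action that renders the form
        @person.date_of_birth ||= Date.new(Time.now.year, 1, 1)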

    Read the article

  • Connection details & timeouts in a java web service client

    - by f1sh
    Hello fellow coders, I have to implement a web service client against a given WSDL file. I used the SDK's wsimport tool to create Java classes from the WSDL, as well as a class that wraps the web service's only method (enhanceAddress(auth, param, address)) into a simple Java method. So far, so good. The web service is functional and returns results correctly. The code looks like this:

        try {
            EnhancedAddressList uniservResponse =
                getWebservicePort().enhanceAddress(m_auth, m_param, uniservAddress);
            // where the port^ is the HTTP SOAP 1.2 endpoint
        } catch (Throwable e) {
            throw new AddressValidationException("Error during uniserv webservice request.", e);
        }

    The problem now: I need to get information about the connection and any error that might occur, in order to populate various JMX values (such as COUNT_READ_TIMEOUT, COUNT_CONNECT_TIMEOUT, ...). Unfortunately, the method does not officially throw any exceptions, so in order to get details about a ConnectException I need to use getCause() on the ClientTransportException that will be thrown. Even worse: I tried to test the read timeout value, but there is none. I changed the service's location in the WSDL file to post the request to a PHP script that simply waits forever and does not return. Guess what: the web service client does not time out but waits forever as well (I killed the app after 30+ minutes of waiting). That is not an option for my application, as I would eventually run out of TCP connections if some of them got 'stuck'. The enhanceAddress(auth, param, address) method is not implemented but annotated with javax.jws.* annotations, meaning that I cannot see/change/inspect the code that is actually executed. Do I have any option but to throw the whole wsimport/javax.jws stuff away and implement my own SOAP client?
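
    One escape hatch that avoids a hand-rolled SOAP client, sketched with property names that are JAX-WS RI/Metro-specific (treat them as assumptions to verify against your runtime): the generated port also implements BindingProvider, and timeouts can be set in its request context:

        import javax.xml.ws.BindingProvider;
        import java.util.Map;

        // Every port generated by wsimport also implements BindingProvider.
        BindingProvider bp = (BindingProvider) getWebservicePort();
        Map<String, Object> ctx = bp.getRequestContext();

        // Values in milliseconds. Property names differ between the JDK-bundled RI
        // ("com.sun.xml.internal.ws.*") and standalone Metro ("com.sun.xml.ws.*");
        // setting both spellings is harmless.
        ctx.put("com.sun.xml.internal.ws.connect.timeout", 5000);
        ctx.put("com.sun.xml.internal.ws.request.timeout", 15000);
        ctx.put("com.sun.xml.ws.connect.timeout", 5000);
        ctx.put("com.sun.xml.ws.request.timeout", 15000);

    A read timeout should then surface as a WebServiceException wrapping a SocketTimeoutException, which is catchable and countable for the JMX values.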

    Read the article

  • Perf4J Logging Config Help

    - by manyxcxi
    I currently have a long-running process that I am trying to analyze with Perf4J. I currently have it writing results in CSV format to its own log file, using the AsyncCoalescingStatisticsAppender and a StatisticsCsvLayout on the file appender. My question is: when I try to use the --graph option from the command line (using the perf4j jar), it isn't populating the data points — it isn't populating anything. Are my appenders set incorrectly? The log file contains hundreds (sometimes thousands) of data points for about 10 different tag names.

        <appender name="perfAppender" class="org.apache.log4j.FileAppender">
          <param name="File" value="perfStats.log"/>
          <layout class="org.perf4j.log4j.StatisticsCsvLayout">
          </layout>
        </appender>

        <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender">
          <!-- The TimeSlice option is used to determine the time window for which
               all received StopWatch logs are aggregated to create a single
               GroupedTimingStatistics log. Here we set it to 10 seconds,
               overriding the default of 30000 ms -->
          <param name="TimeSlice" value="10000"/>
          <appender-ref ref="ConsoleAppender"/>
          <appender-ref ref="CompositeRollingFileAppender"/>
          <appender-ref ref="perfAppender"/>
        </appender>
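
    A guess worth checking, since the CLI side is the part that stays silent: the perf4j jar's --graph/--parse modes read raw StopWatch log lines (start[..] time[..] tag[..]), while a file produced by StatisticsCsvLayout contains already-aggregated CSV rows, which that parser may simply skip — yielding empty graphs. A sketch (untested) of a second appender that keeps an unaggregated copy for the CLI; attach it directly to the org.perf4j.TimingLogger logger rather than to the coalescing appender:

        <!-- Sketch: plain StopWatch lines the perf4j jar can parse -->
        <appender name="rawPerfAppender" class="org.apache.log4j.FileAppender">
          <param name="File" value="rawPerfStats.log"/>
          <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%m%n"/>
          </layout>
        </appender>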

    Read the article

  • Hubs/switches taking out switches?

    - by Bart Silverstrim
    Here's the issue: we have a network with a lot of Cisco switches. Someone plugged a hub into the network, and then we started seeing "weird" behavior: errors in communication between clients and servers, network timeouts, dropped network connections, etc. It seemed that somehow that hub (or SOHO switch) was particularly freaking out our Cisco 3700 series switches. Disconnect that hub or Netgear-type SOHO switch and things settled down again. We're in the process of trying to get a centralized logging server for SNMP and management, etc., to see if we can trap errors or narrow down when someone does this sort of thing without our knowledge. Things seem to work, for the most part, without issue; we just get freaky oddball incidents on particular switches that don't seem to have any explanation, until we find out someone decided to take matters into their own hands to expand the available ports in their room. Without getting into procedure changes, or locking down ports, or "in our organization they'd be fired" answers, can someone explain why adding a small switch or hub — not necessarily a SOHO router (even a dumb hub apparently caused the 3700s to freak out) sending DHCP requests out — will cause issues? The boss said it's because the Ciscos are getting confused by that rogue hub/switch bridging multiple MACs/IPs into one port on the Cisco switches and they just choke on that, but I thought their MAC address tables should be able to handle multiple machines coming in on one port. Has anyone seen that behavior before and have a clearer explanation of what's happening? I'd like to know for future troubleshooting and better understanding, rather than just waving my hand and saying "you just can't".

    Read the article

  • Router to WiFi Client to Router (new solution for distance when a repeater doesn't help)

    - by Kangarooo
    I have Ethernet to a TL-WR340G with WiFi enabled. Using a TL-WA500, I tried repeater mode, which was not good enough and had password problems (I could not connect when using either ASCII or normal passwords set one way, but it worked in repeater mode the other way), and it also could not forward (repeat) WPA/WPA2 security. So, since this repeater can also act as a client, I made it a client and used another router (TL-WR740N) to take the wired connection from that client, and everything was working for a little while. Every machine is set to automatic DHCP. When first setting up client mode, I found it working after doing a reset. Then after some tens of minutes the internet stopped working. When I removed the WiFi client, everything went back to normal. Where is the problem, and how do I make this work?

        Ethernet --> TL-WR340G (auto DHCP) ==> wifi ==> TL-WA500
        TL-WA500 in wifi client mode (auto DHCP) ==> wire ==> TL-WR740N
        TL-WR740N in router mode (auto DHCP) ==> my computer

    In other words:

        TL-WR340G ) ) ) ) TL-WA500 ===== TL-WR740N ==== PC1
        ( ) ) = WiFi, === = wire )

    Read the article

  • NMock 2.0 - how to stub a non-interface call?

    - by dferraro
    Hello, I have a class API which has full code coverage and uses DI to mock out all the logic in the main class function (Job.Run), which does all the work. I found a bug in production where we weren't doing some validation on one of the data input fields. So, I added a stub function called ValidateFoo(), wrote a unit test against this function to expect a JobFailedException, and ran the test — it failed, obviously, because that function was empty. I added the validation logic, and now the test passes. Great, now we know the validation works. The problem is: how do I write the test to make sure that ValidateFoo() is actually called inside Job.Run()? ValidateFoo() is a private method of the Job class, so it's not an interface. Is there any way to do this with NMock 2.0? I know TypeMock supports fakes of non-interface types, but changing mock libraries right now is not an option. At this point, if NMock can't support it, I will simply add the ValidateFoo() call to the Run() method and test things manually — which obviously I'd prefer not to do, considering my Job.Run() method has 100% coverage right now. Any advice? Thanks very much; it is appreciated.

    EDIT: The other option I have in mind is to just create an integration test for my Job.Run functionality (injecting into it true implementations of the composite objects instead of mocks). I will give it a bad input value for that field and then validate that the job failed. This works and covers my test — but it's not really a unit test, but instead an integration test that tests one unit of functionality... hmm.

    EDIT 2: Is there any way to do this? Anyone have ideas? Maybe TypeMock — or a better design?
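
    One design-level way out, sketched with invented names and under the assumption that refactoring is on the table: pull the validation behind an interface the Job receives via DI, like its other collaborators — NMock 2.0 can then set an expectation on it without needing to fake a private method:

        // Hypothetical refactoring sketch
        public interface IFooValidator
        {
            // Throws JobFailedException when the field is invalid.
            void ValidateFoo(JobInput input);
        }

        public class Job
        {
            private readonly IFooValidator validator;

            public Job(IFooValidator validator)
            {
                this.validator = validator;
            }

            public void Run(JobInput input)
            {
                validator.ValidateFoo(input); // now observable from a unit test
                // ... rest of the run logic ...
            }
        }

    A test can then assert the call with something like Expect.Once.On(mockValidator).Method("ValidateFoo"), keeping Job.Run()'s coverage intact.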

    Read the article

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something has changed. I'm not sure what and when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolution should work normally.

        /etc/hosts:
        127.0.0.1 localhost test
        127.0.1.1 desktop

        /etc/host.conf:
        order hosts,bind
        multi on

        /etc/resolv.conf:
        # Generated by NetworkManager
        search [search servers obtained via DHCP]
        nameserver 192.168.0.3

        /etc/nsswitch.conf:
        passwd: compat
        group: compat
        shadow: compat
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis

    But in fact it is not:

        user@test ~$ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    Pinging is OK.

        user@test ~$ host test
        test.mydomain.com has address xx.xxx.161.201

    I suspect that NetworkManager might be causing this misbehavior, but I don't know where to start checking. Any thoughts, suggestions?
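
    One detail worth knowing before digging into NetworkManager: the host utility queries the DNS servers from /etc/resolv.conf directly and never consults /etc/hosts or the nsswitch 'hosts' line, so host disagreeing with ping is normal rather than a symptom. To query through the same resolver path that ordinary applications use:

        # getent walks the nsswitch.conf 'hosts' line: files first, then mdns, then DNS
        getent hosts test

    If getent returns 127.0.0.1, the hosts file is being honored and the oddity is confined to DNS-only tools.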

    Read the article

  • 3 servers, is this a cluster?

    - by Andy Barlow
    Hello, at the moment I have one Ubuntu server, 9.10, running with a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and as a mail server. I also have 2 other servers with exactly the same hardware and spec as the first, which have an rsync job set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email — especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, in some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • How to make Doxygen ignore specific PHP functions when generating documentation from a purely procedural codebase

    - by Senthil
    I am writing a PHP library and I am trying out Doxygen to generate the API documentation. My library does not use OOP; all code is procedural. I use a lot of helper functions which have an _ (underscore) prefix in their names. They are not part of the publicly exposed API; they are just used internally. Even though they are commented just like the API functions, I don't want them included when giving out the documentation for the API. I want Doxygen to ignore these functions. I can think of two solutions for this, but I am not able to implement either of them. First, I could set some configuration in Doxygen to make it ignore specific function name patterns. I went through the Doxygen help documentation and searched the web. There seem to be options to ignore file and folder name patterns, but I am not able to find an option to specify a function name pattern and make it ignore those functions. Second, along with all the other content in the comments above functions, I could add some other keyword or something and make Doxygen ignore those functions. I haven't been able to find out how to do that either. How can I make Doxygen ignore specific PHP functions when generating documentation?

    Update: I searched within Stack Overflow and came across this question. It looked similar to my question. I found out about the EXCLUDE_SYMBOLS config option in one of the answers. You can use that to exclude function names too. More importantly, wildcards are supported, so I am able to ignore all my functions with an _ prefix :) This is ridiculous! I should've done more research :| Someone please delete this question or add this answer as an answer.
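
    For anyone landing here, the setting mentioned in the update looks like this in the Doxyfile — a sketch that assumes the underscore-prefix convention described above:

        # Ignore internal helpers whose names start with an underscore
        EXCLUDE_SYMBOLS = _*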

    Read the article

  • Writing re-entrant lexer with Flex

    - by Viet
    I'm a newbie to flex. I'm trying to write a simple re-entrant lexer/scanner with flex. The lexer definition goes below. I get stuck with compilation errors as shown below (a yyg issue).

    reentrant.l:

        /* Definitions */
        digit       [0-9]
        letter      [a-zA-Z]
        alphanum    [a-zA-Z0-9]
        identifier  [a-zA-Z_][a-zA-Z0-9_]+
        integer     [0-9]+
        natural     [0-9]*[1-9][0-9]*
        decimal     ([0-9]+\.|\.[0-9]+|[0-9]+\.[0-9]+)

        %{
        #include <stdio.h>
        #define ECHO fwrite(yytext, yyleng, 1, yyout)
        int totalNums = 0;
        %}

        %option reentrant
        %option prefix="simpleit_"

        %%
        ^(.*)\r?\n    printf("%d\t%s", yylineno++, yytext);
        %%

        /* Routines */
        int yywrap(yyscan_t yyscanner) {
            return 1;
        }

        int main(int argc, char* argv[]) {
            yyscan_t yyscanner;

            if (argc < 2) {
                printf("Usage: %s fileName\n", argv[0]);
                return -1;
            }

            yyin = fopen(argv[1], "rb");
            yylex(yyscanner);
            return 0;
        }

    Compilation errors:

        vietlq@mylappie:~/Desktop/parsers/reentrant$ gcc lex.simpleit_.c
        reentrant.l: In function 'main':
        reentrant.l:44: error: 'yyg' undeclared (first use in this function)
        reentrant.l:44: error: (Each undeclared identifier is reported only once
        reentrant.l:44: error: for each function it appears in.)
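
    In case it helps, a sketch of how the re-entrant entry point usually looks (untested against this exact lexer): in re-entrant mode, yyin is a macro over the scanner-state struct (yyg), which is why referencing it in main — where no scanner exists yet — produces exactly that 'yyg' undeclared error. Note also that %option prefix="simpleit_" renames the whole API, so yylex becomes simpleit_lex and yywrap must be defined as simpleit_wrap:

        int main(int argc, char* argv[]) {
            yyscan_t scanner;
            FILE *in;

            if (argc < 2) {
                printf("Usage: %s fileName\n", argv[0]);
                return -1;
            }
            in = fopen(argv[1], "rb");

            simpleit_lex_init(&scanner);    /* allocate the scanner state first */
            simpleit_set_in(in, scanner);   /* instead of assigning to yyin */
            simpleit_lex(scanner);
            simpleit_lex_destroy(scanner);  /* free the scanner state */
            fclose(in);
            return 0;
        }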

    Read the article

  • .NET regex: Match.NextMatch() never returns

    - by Jimmy
    I have a regex that seems to have worked fine for the past year or so, and all of a sudden today, with a new, slightly different text to match against, Match.NextMatch() never returns. I'm no regex expert and I'm sure the regex can be optimized, but previous data sets weren't much more complex than what I've tried today. Furthermore, the regex works fine against the offending data set in a tool like RegexBuddy; it's only in .NET (running in debug in Visual Studio) that it seems to hang. Nevertheless, if anyone can figure out how to tweak the regex to make it work, I'd really appreciate it. This is the regex:

        <tr>(<td[^>]*><a[^>]*>(?<callOptionTicker>[A-Z]{1,5}\d{6}C\d{8})</a></td>)(<td[^>]*>.*?</td>){6}(<td[^>]*><b><a[^>]*>(?<strikePrice>\d*\.\d*)</a></b></td>)(<td[^>]*><a[^>]*>(?<putOptionTicker>[A-Z]{1,5}\d{6}P\d{8})</a></td>)

    It's meant to extract put and call option tickers from a Yahoo option chain page (i.e., raw HTML). It works fine for IBM:

        http://finance.yahoo.com/q/os?s=IBM&m=2010-05-21

    It doesn't work for SPX options (this is the offending data set):

        http://finance.yahoo.com/q/os?s=I:SPX.W&m=2010-05
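
    A hedged guess at the hang, since nothing below is verified against the SPX page: when no overall match exists, the (<td[^>]*>.*?</td>){6} section can backtrack catastrophically, because .*? is free to swallow </td><td...> boundaries and the engine retries exponentially many partitions of the row — a tool like RegexBuddy typically aborts such attempts early, which would explain why it appears to work there. .NET supports atomic groups, which discard those retry positions; a sketch of the middle section rewritten under the assumption that the six cells contain no nested tables:

        (?>(?:<td[^>]*>(?:(?!</td>).)*</td>){6})

    The (?:(?!</td>).)* part refuses to cross a closing tag, and the outer (?>...) makes the six-cell match final once found.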

    Read the article

  • Setting the value of a radio button with JQuery (ASP.NET MVC)

    - by Mario
    I have 2 "lists" of (4) radio buttons {value = "0", "1", "2", "3"} in all lists. The first list needs to drive the second list (which may occur multiple times). So if an option is selected in the first list, the SAME option is selected in the second list. It sounds weird, but it has a purpose. The first list is more of a "select all" type of list, and the second list is an individual selection list(s). I have the value of the "selectall" list via JQuery: $("#selectalllist").change(function() { var str = $('input[name=selectall]:checked').val(); $(".radiolist").attr('checked', str); //This is my thinking }); The current selected value is stored in the str variable, and that is seemingly correct (I used alert to tell me what is stored, and it comes up what I expect). I just need to set the value of the next set of lists. The second line of the code I thought would set the checked value of $(".radiolist") to the stored value, but it ALWAYS set the value to the last entry (value = "3"). I'm wondering if I have the wrong attribute in the .attr('', str) call? Any thoughts?

    Read the article

  • List of objects or parallel arrays of properties?

    - by Headcrab
    The question is, basically: which would be preferable, both performance-wise and design-wise — a list of objects of a Python class, or several lists of numerical properties? I am writing some sort of scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box, so each ball has a number of numerical properties, like x-y-z coordinates, diameter, mass, velocity vector and so on. How to store the system better? The two major options I can think of are:

        1. Make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on.

        2. Make an array of numbers for each property, then for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on.

    To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with a C++ extension?
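
    For concreteness, a minimal sketch of the two layouts (names invented for illustration):

        import numpy as np

        # Option 1: a list of objects (array-of-structs)
        class Ball:
            def __init__(self, x, y, z, diameter, mass):
                self.x, self.y, self.z = x, y, z
                self.diameter = diameter
                self.mass = mass

        balls = [Ball(0.0, 0.0, 0.0, 0.1, 1.0) for _ in range(1000)]
        total_mass = sum(b.mass for b in balls)  # per-object attribute access

        # Option 2: parallel numpy arrays (struct-of-arrays)
        n = 1000
        xs = np.zeros(n)
        ys = np.zeros(n)
        zs = np.zeros(n)
        diameters = np.full(n, 0.1)
        masses = np.full(n, 1.0)
        total_mass = masses.sum()  # one vectorized call

    The second form is what numpy/scipy vectorization wants, which is usually where the performance gap comes from, rather than from Python object overhead alone — and it maps directly onto contiguous arrays if the code is later ported to C/C++.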

    Read the article
