Search Results

Search found 5842 results on 234 pages for 'compiler warnings'.

  • Scope quandary with namespaces, function templates, and static data

    - by Adrian McCarthy
    This scoping problem seems like the type of C++ quandary that Scott Meyers would have addressed in one of his Effective C++ books. I have a function, Analyze, that does some analysis on a range of data. The function is called from a few places with different types of iterators, so I have made it a template (and thus implemented it in a header file). The function depends on a static table of data, AnalysisTable, that I don't want to expose to the rest of the code. My first approach was to make the table a static const inside Analysis. namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { static const int AnalysisTable[] = { /* data */ }; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace It appears that the compiler creates a copy of AnalysisTable for each instantiation of Analyze, which is wasteful of space (and, to a small degree, time). So I moved the table outside the function like this: namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace There's only one copy of the table now, but it's exposed to the rest of the code. I'd rather keep this implementation detail hidden, so I introduced an unnamed namespace: namespace MyNamespace { namespace { // unnamed to hide AnalysisTable const int AnalysisTable[] = { /* data */ }; } // unnamed namespace template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace But now I again have multiple copies of the table, because each compilation unit that includes this header file gets its own. If Analyze weren't a template, I could move all the implementation detail out of the header file. But it is a template, so I seem stuck. My next attempt was to put the table in the implementation file and to make an extern declaration within Analyze. // foo.h ------ namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { extern const int AnalysisTable[]; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ------ #include "foo.h" namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; } This looks like it should work, and--indeed--the compiler is satisfied. The linker, however, complains, "unresolved external symbol AnalysisTable." Drat! (Can someone explain what I'm missing here?) The only thing I could think of was to give the inner namespace a name, declare the table in the header, and provide the actual data in an implementation file: // foo.h ----- namespace MyNamespace { namespace PrivateStuff { extern const int AnalysisTable[]; } // unnamed namespace template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses PrivateStuff::AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ----- #include "foo.h" namespace MyNamespace { namespace PrivateStuff { const int AnalysisTable[] = { /* data */ }; } } Once again, I have exactly one instance of AnalysisTable (yay!), but other parts of the program can access it (boo!). The inner namespace makes it a little clearer that they shouldn't, but it's still possible. 
    Is it possible to have exactly one instance of the table while also putting it beyond the reach of everything but Analyze?
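
    A minimal sketch (not the original project code) of what the extern attempt above is likely missing: in C++, a const object at namespace scope has internal linkage unless it is declared extern, so the plain definition in foo.cpp never matches the block-scope declaration inside the template and the linker reports it unresolved. Adding extern to the definition keeps exactly one table while leaving Analyze the only code that names it; the identifiers come from the question and the data values are placeholders.

        // foo.h -- sketch only
        namespace MyNamespace {

        template <typename InputIterator>
        int Analyze(InputIterator begin, InputIterator end)
        {
            // Block-scope declaration: only code inside Analyze can name the table.
            extern const int AnalysisTable[];
            (void)begin; (void)end;      // real code would walk the range
            return AnalysisTable[0];     // stand-in for the real analysis
        }

        } // namespace MyNamespace

        // foo.cpp -- sketch only
        #include "foo.h"
        namespace MyNamespace {

        // 'extern' is the missing piece: a const object at namespace scope has
        // internal linkage by default, so without it this definition never
        // matches the block-scope declaration above and the link fails.
        extern const int AnalysisTable[] = { 1, 2, 3 };   // placeholder data

        } // namespace MyNamespace

        // main.cpp -- usage: instantiating Analyze from another file now links.
        #include "foo.h"
        int values[] = { 4, 5, 6 };
        int main() { return MyNamespace::Analyze(values, values + 3); }

    With that one-word change the table is a single object living in foo.cpp, and other code can only reach it by deliberately writing its own extern declaration for it.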

    Read the article

  • Spurious alleged file corruption with an SSD

    - by Johannes Rössel
    Recently my laptop has sometimes warned about corrupted files on the hard drive (Samsung SSD PB22-JS3 TM). So far this has only happened when updating (or checking out) an SVN repository with either TortoiseSVN or the command line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough, which should be the case with SVN). However, when looking into the warned-about directory I notice nothing strange or unusual and don't get any more warnings about it, and another try at updating the working copy works (SVN stops updating once that error occurs, TortoiseSVN even with an appropriate error message). Well, mostly; sometimes it does it again, albeit with a different directory. Since the laptop is only a few months old I doubt the SSD is failing already; five months of normal usage shouldn't be too surprising. Also it has (so far) occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time and some part between the software and the hardware doesn't quite catch up fast enough? I don't know enough about this to make an informed guess here. Does anyone know what's up here? ETA: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.

    Read the article

  • How to run MS C++ 6.0 on Windows 7

    - by hotei
    I have Microsoft Visual C++ 6.0 on XP. I'd like to move it to a Windows 7 platform, but when I try to install it there I get some garbage about it not being compatible, proceed at your own risk, etc. When I proceed, it (not surprisingly) doesn't work. Is there a way to convince these Microsoft tools to play nice with each other? I have Win7 Home edition, but I would be willing to upgrade to Win7 Pro IF I knew it would work under "XP Mode". Failing both those options, what is the least expensive "upgrade" path for C++? I don't need a bunch of other junk, just the C++ compiler. The goal is to retire my XP system, since currently the only reason I keep it is to compile C++ programs that eventually run under Win7. Thanks, Hotei

    Read the article

  • iPhone SDK / Objective C Syntax Question

    - by Koppo
    To all, I was looking at the sample project from http://iphoneonrails.com/ and I saw they took the NSObject class and added methods to it for the SOAP calls. Their sample project can be downloaded from here http://iphoneonrails.com/downloads/objective_resource-1.01.zip. My question is related to my lack of knowledge on the following syntax as I haven't seen it yet in a iPhone project. There is a header file called NSObject+ObjectiveResource.h where they declare and change NSObject to have extra methods for this project. Is the "+ObjectiveResource.h" in the name there a special syntax or is that just the naming convention of the developers. Finally inside the class NSObject we have the following #import <Foundation/Foundation.h> @interface NSObject (ObjectiveResource) // Response Formats typedef enum { XmlResponse = 0, JSONResponse, } ORSResponseFormat; // Resource configuration + (NSString *)getRemoteSite; + (void)setRemoteSite:(NSString*)siteURL; + (NSString *)getRemoteUser; + (void)setRemoteUser:(NSString *)user; + (NSString *)getRemotePassword; + (void)setRemotePassword:(NSString *)password; + (SEL)getRemoteParseDataMethod; + (void)setRemoteParseDataMethod:(SEL)parseMethod; + (SEL) getRemoteSerializeMethod; + (void) setRemoteSerializeMethod:(SEL)serializeMethod; + (NSString *)getRemoteProtocolExtension; + (void)setRemoteProtocolExtension:(NSString *)protocolExtension; + (void)setRemoteResponseType:(ORSResponseFormat) format; + (ORSResponseFormat)getRemoteResponseType; // Finders + (NSArray *)findAllRemote; + (NSArray *)findAllRemoteWithResponse:(NSError **)aError; + (id)findRemote:(NSString *)elementId; + (id)findRemote:(NSString *)elementId withResponse:(NSError **)aError; // URL construction accessors + (NSString *)getRemoteElementName; + (NSString *)getRemoteCollectionName; + (NSString *)getRemoteElementPath:(NSString *)elementId; + (NSString *)getRemoteCollectionPath; + (NSString *)getRemoteCollectionPathWithParameters:(NSDictionary *)parameters; + (NSString *)populateRemotePath:(NSString *)path withParameters:(NSDictionary *)parameters; // Instance-specific methods - (id)getRemoteId; - (void)setRemoteId:(id)orsId; - (NSString *)getRemoteClassIdName; - (BOOL)createRemote; - (BOOL)createRemoteWithResponse:(NSError **)aError; - (BOOL)createRemoteWithParameters:(NSDictionary *)parameters; - (BOOL)createRemoteWithParameters:(NSDictionary *)parameters andResponse:(NSError **)aError; - (BOOL)destroyRemote; - (BOOL)destroyRemoteWithResponse:(NSError **)aError; - (BOOL)updateRemote; - (BOOL)updateRemoteWithResponse:(NSError **)aError; - (BOOL)saveRemote; - (BOOL)saveRemoteWithResponse:(NSError **)aError; - (BOOL)createRemoteAtPath:(NSString *)path withResponse:(NSError **)aError; - (BOOL)updateRemoteAtPath:(NSString *)path withResponse:(NSError **)aError; - (BOOL)destroyRemoteAtPath:(NSString *)path withResponse:(NSError **)aError; // Instance helpers for getting at commonly used class-level values - (NSString *)getRemoteCollectionPath; - (NSString *)convertToRemoteExpectedType; //Equality test for remote enabled objects based on class name and remote id - (BOOL)isEqualToRemote:(id)anObject; - (NSUInteger)hashForRemote; @end What is the "ObjectiveResource" in the () mean for NSObject? What is that telling Xcode and the compiler about what is happening..? After that things look normal to me as they have various static and instance methods. I know that by doing this all user classes that inherit from NSObject now have all the extra methods for this project. 
    My question is what the parentheses are doing after NSObject. Are they referencing a header file, or are they letting the compiler know that this class is being overridden? Thanks again, and my apologies ahead of time if this is a dumb question; I'm just trying to learn what I lack.

    Read the article

  • Email test deferred (mail transport unavailable) with ClamAV

    - by dirt
    I'm trying to set up a simple new mail server; when I send a test email to the server the email is getting hung up during delivery (user mapping is found) and the email is never found in /home/user/Maildir/new Here is my maillog after a fresh reboot and test email, there are a few warnings I am unfamiliar with. Can you please point me in the right direction? Oct 25 14:54:57 loki dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled) Oct 25 14:54:58 loki postfix/postfix-script[1369]: starting the Postfix mail system Oct 25 14:54:58 loki postfix/master[1370]: daemon started -- version 2.6.6, configuration /etc/postfix Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: request to update table btree:/etc/postfix/smtpd_scache in non-postfix directory /etc/postfix Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: redirecting the request to postfix-owned data_directory /var/lib/postfix Oct 25 14:56:00 loki postfix/smtpd[1455]: connect from mail-ob0-f180.google.com[209.85.214.180] Oct 25 14:56:01 loki postfix/smtpd[1455]: 1CF5E20A8B: client=mail-ob0-f180.google.com[209.85.214.180] Oct 25 14:56:01 loki postfix/cleanup[1461]: 1CF5E20A8B: message-id= Oct 25 14:56:01 loki postfix/qmgr[1379]: 1CF5E20A8B: from=, size=1788, nrcpt=1 (queue active) Oct 25 14:56:01 loki postfix/qmgr[1379]: warning: connect to transport private/scan: No such file or directory Oct 25 14:56:01 loki postfix/error[1462]: 1CF5E20A8B: to=, orig_to=, relay=none, delay=0.18, delays=0.15/0.02/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable) Oct 25 14:56:01 loki postfix/smtpd[1455]: disconnect from mail-ob0-f180.google.com[209.85.214.180] master.cf snippets: # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - n - - smtpd submission inet n - n - - smtpd -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING smtps inet n - n - - smtpd -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING scan unix - - n - 16 smtp -o smtp_data_done_timeout=1200 -o smtp_send_xforward_command=yes -o disable_dns_lookups=yes 127.0.0.1:10026 inet n - n - 16 smtpd -o content_filter= -o local_recipient_maps= -o relay_recipient_maps= -o smtpd_restriction_classes= -o smtpd_client_restrictions= -o smtpd_helo_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks_style=host -o smtpd_authorized_xforward_hosts=127.0.0.0/8

    Read the article

  • Amazon EC2 instance was not available for a few minutes (Amazon showed that everything was ok)

    - by Salvador Dali
    A few minutes ago my Amazon EC2 instance was unavailable for a few minutes. During this time I was not able to connect to the web site over HTTP, nor was I able to SSH into it. I was also unable to connect to my Amazon management console for some time (less time than my instance was unavailable). When I could connect to the management console, it showed me that everything was running smoothly (but I still could not connect to the instance in any way for a minute or two). During this time I checked their status page only to see that there were no issues (my instance is in Ireland and there is nothing wrong there today). After that I was able to log in. I checked my logins with last to see that no one except me had been logging in. I also looked in the Apache logs and there were no errors or warnings during this time. Right now, when I look at my Amazon monitoring, I see a small spike in CPU in the last 15 minutes (but this is from 10% to something like 20%). I have no idea what it can be (I have never experienced anything like this before), and therefore I have no idea how scared I should be or what else I should look for. Can anyone give me a hint what my actions should be in such a situation?

    Read the article

  • Using ZFS or XFS on a Xen guest running Linux

    - by zoot
    Background: I'm investigating the viability of using a filesystem other than ext3/4, with the ability to run snapshots for backup and rollback purposes. The servers under consideration are mailbox server nodes running on Linode's Xen based VPS platform. I'm particularly drawn to the various published benefits which ZFS offers in terms of data integrity and this year's stable release of native ZFS support in Linux - http://zfsonlinux.org ZFS appears to be the more thorough option in terms of benefits and simplicity (instead of LVM+XFS). Please note that I have little experience with ZFS (which I use on a local FreeNAS installation) and none with XFS, hence the post. To date, my servers are using ext3 filesystems, not managed under LVM. Question in detail: So, I have two questions. (1) Which of the two filesystems would be the better choice for the best of all of the following 3 aspects, running on a Xen Linux guest? Snapshots Data Integrity Performance (2) If ZFS is a viable option, is it practical to use ZRAID across Xen disk images to further enhance the solution for data integrity? Note: I'm reluctant to consider btrfs, given the many warnings I've read about in using it on production systems.

    Read the article

  • MYSQL - Multiple set values in one update statement [migrated]

    - by Maurzank
    MYSQL - MULTIPLE SET VALUES IN ONE UPDATE STATEMENT USING 2 TABLES AS REFERENCE AND STORING VALUES IN ONE OF THOSE TABLES WITH A SPECIFIC LOGIC. Hello people, A problem came up by making an UPDATE. The example issue is as follows: CURRENUSRTABLE +------------+-------+ | ID | STATE | +------------+-------+ | 123 | 3 | | 456 | 3 | | 789 | 3 | +------------+-------+ HISTORYTABLE +------------+------------+-----+ | ID | TRDATE | ACT | +------------+------------+-----+ | 123 | 2013-11-01 | 5 | | 456 | 2013-11-01 | 5 | | 789 | 2013-11-01 | 5 | | 123 | 2013-11-02 | 4 | | 456 | 2013-11-02 | 4 | | 789 | 2013-11-02 | 4 | | 123 | 2013-11-03 | 3 | | 456 | 2013-11-03 | 3 | | 789 | 2013-11-03 | 3 | +------------+------------+-----+ I'm using these variables: @BA=3, @DE=5, @BL=4, What I'm trying to do is an update on CURRENUSRTABLE.STATE using HISTORYTABLE.ACT with the following logic: STATE value will be updated as ACT value, except when STATE value is 4 and ACT is 3, then STATE will be 5 I made this statement: UPDATE CURRENUSRTABLE RIGHT OUTER JOIN HISTORYTABLE ON HISTORYTABLE.ID=CURRENUSRTABLE.ID SET CURRENUSRTABLE.STATE= ( SELECT CASE HISTORYTABLE.ACT WHEN @DE THEN @DE WHEN @BL THEN @BL WHEN @BA THEN CASE CURRENUSRTABLE.STATE WHEN @BL THEN @DE ELSE @BA END END ORDER BY HISTORYTABLE.TRDATE,FIELD(HISTORYTABLE.ACT,@DE,@BL,@BA) ) WHERE HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-01' I'm intentionally using "RIGHT OUTER JOIN" and "HISTORYTABLE.TRDATE BETWEEN" because I'd like to change the values in CURRENUSRTABLE using a timeframe of more than one day. If I execute this statement many times using only one day (i.e. "BETWEEN '2013-11-01' AND '2013-11-01'" and then "BETWEEN '2013-11-02' AND '2013-11-02'"... etc ) it works perfectly, but if it is executed using the dates "BETWEEN '2013-11-01' AND '2013-11-03'" the results on CURRENUSRTABLE.STATE are 3, which is wrong, it should be 5. I think the problem relies on "CASE CURRENUSRTABLE.STATE" when uses "HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-03'", because it reads the STATE 9 times which has not been commited yet until the statement ends. Query OK, 9 rows affected (0.00 sec) Rows matched: 9 Changed: 9 Warnings: 0 Maybe the solution is very simple, but unfortunately I've not much practice on MySQL since I've worked with it less than 2 months :) Is there any suggestions to solve this issue? PD: MySQL version is 4.1.22, I know is very old an EOL, unfortunately I have to make these statements on this version. Thanks!

    Read the article

  • Setup a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon's VPC EC2 instances. If anyone hasn't worked with Amazon VPC before, VPN is a pain in the arse! Anyways, I setup a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers I manually connected to our colo VPN using my user account on the new Amazon EC2 machine. I was able to join the domain and the new Amazon server became another domain controller on our network. So far so good. The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN so it can't communicate with the existing controllers), it takes a good 6-8 minuted before I can remote into the server (instead of the 1-2 minutes it should take). Also, during this time most of the services we also run (like IIS) also give 404 errors until the 6-8 minutes have passed. It's almost like the domain controller is attempting to reach the other domain controllers first and after 6-8 minutes it falls back to the one located on the local machine? I don't think that's what's happening though, because Server 2008 R2 doesn't have primary and backup domain controllers. They're all equal as far as Windows is concerned. For my network adapter I have only one DNS listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to over VPN when VPN was enabled. In the server logs I'm seeing these warnings pop up during a reboot: The winlogon notification subscriber is taking long time to handle the notification event (CreateSession). The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession). Any ideas on what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change being reflected back to the other domain controllers in our colo locations.

    Read the article

  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block) but my code doesn't crash and it runs fast enough. At work my multithreaded code needs to be a tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this: EnterCriticalSection(&m_Crit4); m_bSomeVariable = true; LeaveCriticalSection(&m_Crit4); Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary then there's no need for synchronising this operation with a critical section. I did some more research online to see whether this operation did not need synchronisation, and I came up with two scenarios it did: The CPU implements out of order execution or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and The int is not 4-byte aligned. I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to this variable using memory barriers, ensuring that the variable is always completely written/read to the main system memory before using it. Number 2 I cannot verify, I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not do you need to use a combination of instructions? That would introduce the problem. So... QUESTION 1: Does using the "volatile" keyword (implicity using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte on x86/x64 variable between read/write operations? QUESTION 2: Is there the explicit requirement that the variable be 4-byte/8-byte aligned? I did some more digging into our code and the variables defined in the class: class CExample { private: CRITICAL_SECTION m_Crit1; // Protects variable a CRITICAL_SECTION m_Crit2; // Protects variable b CRITICAL_SECTION m_Crit3; // Protects variable c CRITICAL_SECTION m_Crit4; // Protects variable d // ... }; Now, to me this seems excessive. I thought critical sections synchronised threads between a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect, if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here, named mutexes are shared across processes, or only processes of the same name? QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes? 
    I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here? QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problem should each be applied to? Is there a vast performance penalty for choosing one over the other? And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only in a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right? Thanks in advance for any responses :)
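
    On QUESTION 1, a minimal sketch of what has since become the portable answer for a single shared flag, assuming a C++11 compiler (which post-dates the VS2005-era code discussed above): std::atomic provides the cross-thread visibility and ordering guarantees that plain volatile does not promise in standard C++, without taking a critical section around one aligned BOOL-sized value. The names below are stand-ins, not the original class members.

        // Sketch only: a C++11 replacement for guarding a lone flag with m_Crit4.
        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<bool> someFlag{false};   // stand-in for m_bSomeVariable

        void writer()
        {
            // Release store: the flag, plus everything written before it,
            // becomes visible to a thread that performs an acquire load.
            someFlag.store(true, std::memory_order_release);
        }

        void reader()
        {
            // Acquire load pairs with the release store; no critical section
            // is needed just to read or write this one value.
            while (!someFlag.load(std::memory_order_acquire))
                std::this_thread::yield();
            std::puts("flag observed");
        }

        int main()
        {
            std::thread t1(reader), t2(writer);
            t1.join();
            t2.join();
        }

    A critical section (or a mutex) remains the right tool whenever several values must change together as one consistent unit; the atomic only covers the lone-flag case of QUESTION 1.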

    Read the article

  • Can the STREAM and GUPS (single CPU) benchmark use non-local memory in NUMA machine

    - by osgx
    Hello I want to run some tests from HPCC, STREAM and GUPS. They will test memory bandwidth, latency, and throughput (in term of random accesses). Can I start Single CPU test STREAM or Single CPU GUPS on NUMA node with memory interleaving enabled? (Is it allowed by the rules of HPCC - High Performance Computing Challenge?) Usage of non-local memory can increase GUPS results, because it will increase 2- or 4- fold the number of memory banks, available for random accesses. (GUPS typically limited by nonideal memory-subsystem and by slow memory bank opening/closing. With more banks it can do update to one bank, while the other banks are opening/closing.) Thanks. UPDATE: (you may nor reorder the memory accesses that the program makes). But can compiler reorder loops nesting? E.g. hpcc/RandomAccess.c /* Perform updates to main table. The scalar equivalent is: * * u64Int ran; * ran = 1; * for (i=0; i<NUPDATE; i++) { * ran = (ran << 1) ^ (((s64Int) ran < 0) ? POLY : 0); * table[ran & (TableSize-1)] ^= stable[ran >> (64-LSTSIZE)]; * } */ for (j=0; j<128; j++) ran[j] = starts ((NUPDATE/128) * j); for (i=0; i<NUPDATE/128; i++) { /* #pragma ivdep */ for (j=0; j<128; j++) { ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0); Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)]; } } The main loop here is for (i=0; i<NUPDATE/128; i++) { and the nested loop is for (j=0; j<128; j++) {. Using 'loop interchange' optimization, compiler can convert this code to for (j=0; j<128; j++) { for (i=0; i<NUPDATE/128; i++) { ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0); Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)]; } } It can be done because this loop nest is perfect loop nest. Is such optimization prohibited by rules of HPCC?

    Read the article

  • Why am I getting blank error messages in my Apache error log?

    - by Jason Lamoreux
    I am running Apache 2.2 on 64bit Windows Server 2008 Std edition with ActivePerl 5.8.9. My error log is filling up with blank error messages like these: [Wed Mar 31 14:08:31 2010] [error] [client 10.6.1.164] [Wed Mar 31 14:10:32 2010] [error] [client 10.6.1.89] [Wed Mar 31 14:13:20 2010] [error] [client 10.6.1.131] By looking in the access log I can tell that it occurs when our client machines issue a GET to a very simple Perl script. #!perl.exe use strict; no warnings; $|=1; use CGI::Carp('fatalsToBrowser'); use CGI qw(:standard); print header; my $CRLF = "\r\n<br>"; my $Port = '10116'; print "Success!${CRLF}PollInterval=5${CRLF}LMProMode${CRLF}Version=7${CRLF}ConnectionPort=$Port"; exit; The weird thing is that it does not look like this error message is inserted every time a GET to this Perl script occurs. What could cause this error message to appear in the Apache error log?

    Read the article

  • Backup to disk, encrypted, without any installed local software

    - by user30064
    Hi, Ok, this is a tough one, and it might not even be possible, but no harm in asking I guess. I have a Buffalo Terastation file server that I use for network attached storage. After a couple of phone calls to customer services I realised that there is no way to backup to disk encrypted. In effect, I would be carrying unencrypted company data off-site daily, which is obviously unacceptable. I had a go at TrueCrypt, EncFS, and a few others, and as far as I could see all of them required that you install some software on the machine that is to use the file system, which makes sense. Unfortunately the firmware on the Terastation is closed and I cannot install any software (and I can't build from source either, since Buffalo didn't include a compiler). Are there any ways to copy files to disk, where as soon as they are written to the disk they are transparently encrypted, without having to install additional software? I'm not sure it matters too much, but the Terastation firmware is Linux based, although as I mentioned, closed. Many thanks, Andreas

    Read the article

  • Dell PERC 5 - RAID-10 keeps rebuilding drive 2 every day

    - by raid question
    I have a Dell PowerEdge 2950 with this card: RAID bus controller [0104]: Dell PowerEdge Expandable RAID controller 5 [1028:0015] and six disks in a RAID-10. I replaced drive 2, because it didn't show up, and then it started to rebuild itself: root@backup01:~# megaraidsas-status -- Arrays informations -- -- ID | Type | Size | Status a0d0 | RAID 10 | 5587GiB | DEGRADED -- Disks informations -- ID | Model | Status | Warnings a0e8s0 | ATA ST2000DM001-9YN1 1863GiB | online | errs: media:0 other:5393 a0e8s1 | ATA ST2000DM001-9YN1 1863GiB | online | errs: media:0 other:5394 a0e8s2 | ATA ST2000DM001-1E61 1863GiB | rebuild | errs: media:0 other:99 a0e8s3 | ATA ST2000DM001-9YN1 1863GiB | online | errs: media:0 other:5393 a0e8s4 | ATA ST2000DM001-9YN1 1863GiB | online | errs: media:0 other:5393 a0e8s5 | ATA ST2000DM001-9YN1 1863GiB | online | errs: media:0 other:5393 The rebuild finishes, then the virtual drive becomes optimal, and drive 2 goes online. Then once a day, drive 2 acts like it's been removed, and the rebuild starts all over again. How do I make this once a day rebuild stop? Event Description: Removed: PD 02(e1/s2) Event Description: Removed: PD 02(e1/s2) Info: enclPd=08, scsiType=0, portMap=04, sasAddr=1221000002000000,0000000000000000 Event Description: State change on VD 00/0 from OPTIMAL(3) to DEGRADED(2) Event Description: VD 00/0 is now DEGRADED1 Event Description: State change on PD 02(e1/s2) from ONLINE(18) to FAILED(11) Event Description: State change on PD 02(e1/s2) from FAILED(11) to UNCONFIGURED_BAD(1) Event Description: Background Initialization failed on VD 00/0 Event Description: Inserted: PD 02(e1/s2) Event Description: Inserted: PD 02(e1/s2) Info: enclPd=08, scsiType=0, portMap=04, sasAddr=1221000002000000,0000000000000000 Event Description: PD 02(e1/s2) is not a certified drive Event Description: State change on PD 02(e1/s2) Event Description: State change on PD 02(e1/s2) from UNCONFIGURED_GOOD(0) to OFFLINE(10) from UNCONFIGURED_BAD(1) to UNCONFIGURED_GOOD(0) Event Description: Rebuild automatically started on PD 02(e1/s2) Event Description: State change on PD 02(e1/s2) from OFFLINE(10) to REBUILD(14)

    Read the article

  • Nightmare: Upgrading Tomcat 5.5 to 6.0

    - by pavanlimo
    I'm trying to upgrade a perfectly running embedded Tomcat 5.5 to Tomcat 6.0. I understand that all I need to do is replace Tomcat 5.5 jars with 6.0. That's what I did. So I replaced the following jars: catalina-5.0.28.jar catalina-5.5.9.jar catalina-optional-5.5.9.jar commons-el.jar commons-modeler-1.1.0.jar jasper-compiler-jdt.jar jasper-compiler.jar jasper-runtime.jar jmx-5.0.28.jar jsp-api-2.0.jar naming-factory.jar naming-resources.jar servlet-api-2.4.jar servlets-default.jar tomcat-coyote.jar tomcat-http.jar tomcat-util.jar with: annotations-api.jar catalina.jar jasper.jar tomcat-dbcp.jar catalina-ant.jar el-api.jar jsp-api.jar tomcat-i18n-es.jar catalina-ha.jar jasper-el.jar servlet-api.jar tomcat-i18n-fr.jar catalina-tribes.jar jasper-jdt.jar tomcat-coyote.jar tomcat-i18n-ja.jar tomcat-juli.jar As soon as I start the server, I get the following message in the logs at INFO level: INFO: Starting Servlet Engine: Apache Tomcat/6.0.29 Dec 31, 2010 6:04:18 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Based on the this explanation, I need to remove a jar file which has a conflicting Servlet.class. I swear to God, there is no other conflicting jar file, I grepped system wide for Servlet.class, it matched only servlet-api.jar. I also downloaded javaee.jar and replaced it by servlet-api.jar, to same avail. Having tried lot of these stuff, I did not have much to look upto, so set the tomcat logging level to ALL. In the log I could see that it is trying to check for Servlet.class in each and every jar it is loading until it finds servlet-api.jar and throws "jar not loaded" message as soon as it finds servlet-api.jar. See below: FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories FINE: Deploy JAR /WEB-INF/lib/servlet-api.jar to /usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader addJar FINE: addJar(/WEB-INF/lib/servlet-api.jar) Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories Please note however, that Tomcat starts successfully! And as soon as I hit the URL on the browser, I get blank page(this may be in my case only, I guess 'cuz of my web.xml, sorta different from most. Other people on the internet have got Error 404 instead.) 
with following log statements(at finest level) Jan 2, 2011 9:40:01 AM org.apache.catalina.connector.CoyoteAdapter parseSessionCookiesId FINE: Requested cookie session id is 0FBA716E3F9B0147C3AF7ABAE3B1C27B Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Security checking request GET /login.jsp Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: No applicable constraint located Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Not subject to any constraint Jan 2, 2011 9:40:01 AM org.apache.catalina.core.StandardWrapper allocate FINEST: Returning non-STM instance I'm not sure if the above log message is important, but I'm for all-out disclosure here. One interesting thing though, I manually created a dummy jsp file containing only "helloooo" just outside WEB-INF folder(no security constraints for this file). This file was accessible and could be displayed. But, all my jsp's and classes are inside WEB-INF(ofcourse). Sick and tired of this issue, please help me solve it. I've already spent 20-24 hours on this unsuccessfully. Any pointers directions hints leads?

    Read the article

  • Compiling ruby1.9.1 hangs and fills swap!

    - by nfm
    I'm compiling Ruby 1.9.1-p376 under Ubuntu 8.04 server LTS (64-bit), by doing the following: $ ./configure $ make $ sudo make install ./configure works without complaints. make hangs indefinitely until all my RAM and swap is gone. It get stuck after the following output: compiling ripper make[1]: Entering directory `/tmp/ruby1.9.1/ruby-1.9.1-p376/ext/ripper' gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O2 -g -Wall -Wno-parentheses -o ripper.o -c ripper.c If I run the gcc command by hand, with the -v argument to get verbose output, it hangs after the following: Using built-in specs. Target: x86_64-linux-gnu Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) /usr/lib/gcc/x86_64-linux-gnu/4.2.4/cc1 -quiet -v -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H="extconf.h" ripper.c -quiet -dumpbase ripper.c -mtune=generic -auxbase-strip ripper.o -g -O2 -Wall -Wno-parentheses -version -fPIC -fstack-protector -fstack-protector -o /tmp/ccRzHvYH.s ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu" ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/4.2.4/../../../../x86_64-linux-gnu/include" ignoring nonexistent directory "/usr/include/x86_64-linux-gnu" ignoring duplicate directory "../.././ext/ripper" ignoring duplicate directory "../../." #include "..." search starts here: #include <...> search starts here: . ../../.ext/include/x86_64-linux ../.././include ../.. /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/4.2.4/include /usr/include End of search list. GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) (x86_64-linux-gnu) compiled by GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4). GGC heuristics: --param ggc-min-expand=47 --param ggc-min-heapsize=32795 Compiler executable checksum: 6e11fa7ca85fc28646173a91f2be2ea3 I just compiled ruby on another computer for reference, and it took about 10 seconds to print the following output (after the above Compiler executable checksum line): COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486' as -V -Qy -o ripper.o /tmp/cca4fa7R.s GNU assembler version 2.20 (i486-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.20 COMPILER_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/ LIBRARY_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-v' '-I.' 
'-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486' I have absolutely no clue what could be going wrong here - any ideas where I should start?

    Read the article

  • Installing VS2010 components with SP1 beta

    - by Jesus Rodriguez
    Hello, I hope this is the place for this kind of question. I have VS2010 Ultimate with the Service Pack 1 beta installed. I want to play with F#, so I need to install it. The problem is that because I have SP1 installed, the VS DVD can't install the F# compiler. I click on add new feature, I click on F#, and the installer says: "A selected drive is no longer valid. Please review your installation path settings before continuing with setup." Of course the path is correct and I can't change it anyway... I think the problem is because I have SP1. I had the same problem in the past with VS2008 + SP1. What can I do? Uninstalling VS2010 is not an option; it takes ages. Thank you.

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers, site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the win2k3 servers. I understand what the message means, no need to send me links to the technet article explaining it. I'll improve my backups. What I'm more curious about is why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!? Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper DCpromo exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night but I am curious as to what changed... Event Type: Warning Event Source: NTDS Replication Event Category: Backup Event ID: 2089 Date: 3/28/2010 Time: 9:25:27 AM User: NT AUTHORITY\ANONYMOUS LOGON Computer: RedactedName Description: This directory partition has not been backed up since at least the following number of days. Directory partition: DC=MyDomain,DC=com 'Backup latency interval' (days): 30 It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key. 'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Trying to setup catchall mail forwarding on my rackspace's cloudspace server

    - by bob_cobb
    I'm running Ubuntu 12 Precise Pangolin and am trying to configure my server to catchall mail sent to it and forward it to my gmail address. I've been trying lots of examples online like editing my main.cf file which looks like this: smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = destiny alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = destiny, localhost.localdomain, localhost relayhost = smtp.sendgrid.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 51200000 recipient_delimiter = + inet_interfaces = all inet_protocols = all In my /etc/postfix/virtual I have: @mydomain.com [email protected] @myotherdomain.com [email protected] Which isn't working when I email [email protected] or [email protected]. So I got the recommendation to add the following to my /etc/alias: postmaster:root root:[email protected] restarted postfix, and tried emailing [email protected] or [email protected] but it still won't send. Does anyone have any idea what I'm doing wrong here? I'd appreciate any help.

    Read the article

  • MySQL not releasing temp file descriptors

    - by Wakaru44
    Since a few days ago, we’ve been experiencing some serious problems with our MySQL installation: MySQL keeps opening temporal files (normal behaviour) but these files are never released. The consequence is that, eventually, the disk space is exhausted and we have to restart the service and clean up /tmp manually. Using lsof, we see something like this: mysqld 16866 mysql 5u REG 8,3 0 692 /tmp/ibyWJylQ (deleted) mysqld 16866 mysql 6u REG 8,3 0 707 /tmp/ibf5adsT (deleted) mysqld 16866 mysql 7u REG 8,3 0 728 /tmp/ibGjPRyW (deleted) mysqld 16866 mysql 8u REG 8,3 0 5678 /tmp/ibMQDLMZ (deleted) mysqld 16866 mysql 13u REG 8,3 0 5679 /tmp/ibQAnM42 (deleted) Maybe it's not related, but when we shutdown the server, the files are finally freed, and we can see the following warnings in the MySQL log: 121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1333 user: 'xxx' 121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1156 user: 'yyy' 121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1151 user: 'zzz' where 'xxx', 'yyy' and 'zzz' are distinct mysql users (and the only 3 users with active connections to the database). We have a few theories: There is a problem in the OS, that keeps file handlers open. Could it be possible that the OS "delete" operation blocks the threads until shutdown? This may explain the warning at shutdown and the fact that files are finally deleted when the process dies. Until now, data sets were so small that temp files were relatively small and there was enough time to release the file handles without exhausting disk space. We are using Mysql 5.5 on a RHEL 6.2 with the default kernel.

    Read the article

  • Postfix: Errors when using Google Apps for SMTP

    - by Zed Said
    I am using postfix and need to send the mail using google apps smtp. I am getting errors after I thought I had set everything up correctly: May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = zedsaid.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = #relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all delay_warning_time = 4h smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. smtpd_hard_error_limit = 12 ## Gmail Relay relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous smtp_sasl_tls_security_options = noanonymous smtp_sasl_mechanism_filter = login smtp_tls_eccert_file = smtp_tls_eckey_file = smtp_use_tls = yes smtp_enforce_tls = no smtp_tls_CAfile = /etc/postfix/cacert.pem smtpd_tls_received_header = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport debug_peer_list = smtp.gmail.com debug_peer_level = 3 What am I doing wrong?

    Read the article

  • Making Python scripts more user friendly?

    - by Michael Morisy
    I have a bunch of Python scripts I've put together that cut down on busy work, but I'd like to be able to share them in an easier-to-use format for others to use internally. The scripts aren't accessing anything local, just open APIs across a couple of web apps. Ideally: a) users wouldn't have to have a Python interpreter installed; b) they can be using Windows when running it; c) it's simple enough that they can just click something and it will work. I've tried some of the Windows Python executable compilers, but none have really worked well, and I was considering just uploading the scripts to a webserver and putting some basic password access protection around them. Any suggestions for sharing scripts?

    Read the article

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load balanced environment with Windows Server 2008. What are some best practices to setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes - or do I need to make IIS changes on every server? I've never, ever set up replication, but I have worked with a web farm that used it before. Basically, I only know the basics about how it works, and am looking for any advice, guides, warnings, etc on setting this up. If you'd like to offer any advice, I'll let you know how our environment is for now. We have 1 prod server up and the second is nearly ready to go. We are using a cloud system and all machines are VM's. I am in the process of setting up the domain controller now (as I need to have one for DFS). Any ideas on the best way to go about setting up replication? Should we just stick the prod server in from the start or set up using a test VM and our second server and then switch it up later? I do not want to risk overwriting our prod server. Thanks!

    Read the article

  • Installing PHP with iconv on Mac OSX 10.6 gives error: "Undefined symbols for architecture x86_64"

    - by Jason
    I am setting up PHP on my local web server with iconv, which should be there by default, but I am getting the following error: Undefined symbols for architecture x86_64: "_iconv", referenced from: __php_iconv_strlen in iconv.o _php_iconv_string in iconv.o __php_iconv_strpos in iconv.o __php_iconv_appendl in iconv.o _zif_iconv_substr in iconv.o _zif_iconv_mime_encode in iconv.o _php_iconv_stream_filter_append_bucket in iconv.o ... (maybe you meant: _zif_iconv_strlen, _zif_iconv_strpos , _zif_iconv_mime_decode , _php_iconv_string , _zif_iconv_set_encoding , _zif_iconv_get_encoding , _zif_iconv_mime_decode_headers , _iconv_functions , _zif_iconv_mime_encode , _zif_iconv_strrpos , _iconv_module_entry , _iconv_globals , _zif_iconv_substr , _php_if_iconv , _zif_ob_iconv_handler ) ld: symbol(s) not found for architecture x86_64 Now, presumably this means that my iconv is not 64 bit. I tried to download libiconv and manually reinstall it with 64 bit, as such: CFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' CCFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' CXXFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' ./configure But I get the following error: configure: error: in `/Users/jason/Downloads/libiconv-1.13.1': configure: error: C compiler cannot create executables See `config.log' for more details. In my config.lob file, it says at the bottom: configure: exit 77 Any suggestions?

    Read the article

  • Can a conforming C implementation #define NULL to be something wacky

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun. So I'd like to hear what our C experts think without being restricted to 500 characters at a time. The C standard has precious few words to say about NULL and null pointer constants. There's only two relevant sections that I can find. First: 3.2.2.3 Pointers An integral constant expression with the value 0, or such an expression cast to type void * , is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. and second: 4.1.5 Common definitions <stddef.h> The macros are NULL which expands to an implementation-defined null pointer constant; The question is, can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as: #define NULL __builtin_magic_null_pointer Or even: #define NULL ((void*)-1) My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void* must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So for example, it is provable that #define NULL (-1) is not a legal definition, because in if (NULL) do_stuff(); do_stuff() must not be called, whereas with if (-1) do_stuff(); do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice-versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well. So what do other people think? I'd ask for everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side-effects as are required of the abstract machine described by the standard. It says that any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side-effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it. Either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work. 
    Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constant in 3.2.2.3, which is to stringize the constant: #define stringize_helper(x) #x #define stringize(x) stringize_helper(x) Using this macro, one could puts(stringize(NULL)); and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!
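
    A small self-contained illustration of the if (NULL) reasoning above, written in the shared C/C++ subset; nothing here quotes the standard, it only shows the observable difference the argument relies on. Both forms that 3.2.2.3 explicitly lists evaluate as false in a condition, which a hypothetical expansion such as (-1) cannot do:

        #include <stdio.h>

        int main(void)
        {
            /* The two forms 3.2.2.3 lists as null pointer constants: */
            if (0)         puts("0 taken");          /* never printed */
            if ((void *)0) puts("(void*)0 taken");   /* never printed */

            /* A candidate expansion such as (-1) is observably different,
               which is the basis of the "#define NULL (-1) is illegal"
               argument made in the question. */
            if (-1)        puts("(-1) taken");       /* the only line printed */

            return 0;
        }

    Whether an implementation could still justify a wackier expansion through the as-if rule is the open question posed above; the snippet only pins down how the two guaranteed forms must behave.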

    Read the article
