Search Results

Search found 4432 results on 178 pages for 'fail'.

Page 149/178 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • Numbers aren't right when reading a text file; have to tally up the number of 5-letter words and words of 6 or more

    - by user320950
    I want to do this: read the words in the file one at a time (use a string to do this), and count three things: how many single-character words are in the file, how many short (2 to 5 characters) words are in the file, and how many long (6 or more characters) words are in the file. HELP HERE: I'm not sure how to go about reading the file into a string. I know I have to do something like this, but I don't understand the rest. HELP HERE ifstream infile; //char mystring[6]; //char mystring[20]; int main() { infile.open("file.txt"); if(infile.fail()) { cout << " Error " << endl; } int numb_char=0; char letter; while(!infile.eof()) { infile.get(letter); cout << letter; numb_char++; break; } cout << " the number of characters is :" << numb_char << endl; infile.close(); return 0;
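
    Not part of the question, but here is a minimal sketch of the counting logic described above, reading whitespace-delimited words into a std::string with operator>> instead of get(); the filename file.txt is taken from the question, and punctuation attached to words is not handled:

      #include <fstream>
      #include <iostream>
      #include <string>
      using namespace std;

      int main() {
          ifstream infile("file.txt");
          if (infile.fail()) {
              cout << " Error " << endl;
              return 1;
          }
          int single = 0, shorter = 0, longer = 0;   // 1 char, 2-5 chars, 6+ chars
          string word;
          while (infile >> word) {                   // reads one word at a time
              if (word.length() == 1)
                  ++single;
              else if (word.length() <= 5)
                  ++shorter;
              else
                  ++longer;
          }
          cout << "single-character words: " << single << endl;
          cout << "short (2-5) words: " << shorter << endl;
          cout << "long (6+) words: " << longer << endl;
          return 0;
      }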

    Read the article

  • Ruby on Rails: temporarily update an attribute into cache without saving it?

    - by randombits
    I have a bit of code that depicts the hypothetical setup below: a class Foo which contains many Bars, and Bar belongs to one and only one Foo. At some point, Foo can run a finite loop that lasts 2+ iterations. In that loop, something like the following happens: bar = Bar.find_where_in_use_is_zero bar.in_use = 1 Basically, what find_where_in_use_is_zero does is something like this, as far as SQL goes: SELECT * from bars WHERE in_use = 0 Now the problem I'm facing is that I cannot run the following line of code after bar.in_use = 1 is invoked: bar.save The reason is clear: I'm still looping and the new Foo hasn't been created, so we don't have a foo_id to put into bars.foo_id. Even if I allow foo_id to be NULL, we have a problem where one of the bars can fail validation while an earlier one was already saved to the database. In my application, that doesn't work; the entire request is atomic, and either it all succeeds or it fails together. What happens next is that in my loop, I have the potential to select the same exact bar that I did on a previous iteration of the loop, since the in_use flag will not be set to 1 until @foo.save is called. Is there any way to work around this condition and temporarily set the in_use attribute to 1 for subsequent iterations of the loop, so that I retrieve an available bar instance?

    Read the article

  • nothrow or exception?

    - by Muggen
    I am a student and I have limited knowledge of C++, which I am trying to expand. This is more of a philosophical question; I am not trying to implement something. Since #include <new> //... T * t = new (std::nothrow) T(); if(t) { //... } //... will hide the exception, and since dealing with exceptions is heavier compared to a simple if(t), why isn't the normal new T() considered the worse practice, given that we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)? What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead in that case insignificant? Also, assume that an allocation fails (e.g. no memory is left in the system). Is there anything the program can do in that situation, or should it just fail gracefully? There is no way to find free memory on the heap when all of it is reserved, is there? In case an allocation fails and a std::bad_alloc is thrown, how can we assume that, when there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception? Thanks for your time. I hope the question is in line with the rules.
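
    For reference, a small self-contained sketch (mine, not from the post) contrasting the two failure-reporting styles being compared:

      #include <iostream>
      #include <new>

      int main() {
          // nothrow form: failure is reported as a null pointer
          int* a = new (std::nothrow) int[100];
          if (!a) {
              std::cerr << "allocation failed (nothrow)" << std::endl;
          }

          // throwing form: failure is reported as std::bad_alloc
          try {
              int* b = new int[100];
              delete[] b;
          } catch (const std::bad_alloc& e) {
              std::cerr << "allocation failed: " << e.what() << std::endl;
          }

          delete[] a;   // deleting a null pointer is harmless
          return 0;
      }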

    Read the article

  • Automated browser testing: How to test JavaScript in web pages?

    - by Dave
    I am trying to write an application that will test a series of web-pages programmatically. The web pages being tested have JavaScript embedded within them which alter the structure of the HTML when they complete execution. It is then the goal to take the final HTML (post-execution of the embedded JavaScript) and compare it against a known output. Essentially, the Input --- Output for the test application is: URL ---[retrieve HTML]--- HTML ---[execute JS, then compare]--- PASS/FAIL Here is the challenge: I have been unable to find a solution that is able to take the HTML I retrieve from the URL and process the JavaScript, as a browser would, and generate the final HTML a user might see from "View Source" on the same page within the browser. It would be very surprising if this sort of approach has not been made before, so I'm hoping someone out there knows of a fitting solution for this application/problem? If at all possible, I'm hoping for a solution that integrates with .NET (I've tried using the WebBrowser, with no luck). However, if there is an existing 3rd party application that can do exactly this, that would be quite acceptable. Thanks in advance for the suggestions! Dave

    Read the article

  • Java to JavaScript (Encryption related)

    - by balexandre
    Hi guys, I'm having difficulties getting the same string in JavaScript, and I'm thinking that I'm doing something wrong... Java code: import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.Date; import java.util.GregorianCalendar; import sun.misc.BASE64Encoder; private static String getBase64Code(String input) throws UnsupportedEncodingException, NoSuchAlgorithmException { String base64 = ""; byte[] txt = input.getBytes("UTF8"); byte[] text = new byte[txt.length+3]; text[0] = (byte)239; text[1] = (byte)187; text[2] = (byte)191; for(int i=0; i<txt.length; i++) text[i+3] = txt[i]; MessageDigest md = MessageDigest.getInstance("MD5"); md.update(text); byte digest[] = md.digest(); BASE64Encoder encoder = new BASE64Encoder(); base64 = encoder.encode(digest); return base64; } I'm trying this using Paj's MD5 script as well as Farhadi's Base64 Encode script, but my tests fail completely :( My code: function CalculateCredentialsSecret(type, user, pwd) { var days = days_between(new Date(), new Date(2000, 1, 1)); var str = type.toUpperCase() + user.toUpperCase() + pwd.toUpperCase() + days; var md5 = any_md5('', str); var b64 = base64Encode(md5); return encodeURIComponent(b64); } Does anyone know how I can convert this Java method into a JavaScript one? Thank you

    Read the article

  • Rails 4 testing bug?

    - by Jamato
    Situation: if we add two identical line items to a cart, we update the line item quantity instead of adding a duplicate. In the browser everything works fine, but in the unit testing section something fails because of an empty loop in the code, which I wanted to use to update all prices. Why? Is that a unit test engine bug? LineItem.all and cart.line_items produce two DIFFERENT structures during testing. #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 2, price: #<BigDecimal:ba0fb544,'0.4E1',9(27)>> #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 1, price: #<BigDecimal:ba0d1b04,'0.4E1',9(27)>> The cart.line_items one did not update the quantity. The code itself (produces a LineItem which is then saved in line_item_controller, which calls this method): class Cart < ActiveRecord::Base has_many :line_items, dependent: :destroy def add_product(product_id) # LOOK THIS CYCLE BREAKS UNIT TEST, SRSLY, I MEAN IT line_items.each do |item| end current_item = line_items.find_by(product_id: product_id) fresh_price = Product.find_by(id: product_id).price if current_item current_item.quantity += 1 else current_item = line_items.build(product_id: product_id, price: fresh_price) end return current_item end ... Unit test code: test "non-unique item added" do cart = Cart.new(:id => 999) line_item0 = cart.add_product(2) line_item0.save line_item1 = cart.add_product(1) line_item1.save assert_equal 2, cart.line_items.size #success line_item2 = cart.add_product(1) line_item2.save assert_equal 2, cart.line_items.size, "what?" assert cart.total_price > 15 #fail, prices are not enough, quantity of product1 = 1 #we get total price from quantity, it's a simple method in model end And once again: IT DOES WORK in the browser as it should, even with the loop. I feel so dumb right now...

    Read the article

  • The perverse hangman problem

    - by Shalmanese
    Perverse Hangman is a game played much like regular Hangman with one important difference: The winning word is determined dynamically by the house depending on what letters have been guessed. For example, say you have the board _ A I L and 12 remaining guesses. Because there are 13 different words ending in AIL (bail, fail, hail, jail, kail, mail, nail, pail, rail, sail, tail, vail, wail) the house is guaranteed to win because no matter what 12 letters you guess, the house will claim the chosen word was the one you didn't guess. However, if the board was _ I L M, you have cornered the house as FILM is the only word that ends in ILM. The challenge is: Given a dictionary, a word length & the number of allowed guesses, come up with an algorithm that either: a) proves that the player always wins by outputting a decision tree for the player that corners the house no matter what b) proves the house always wins by outputting a decision tree for the house that allows the house to escape no matter what. As a toy example, consider the dictionary: bat bar car If you are allowed 3 wrong guesses, the player wins with the following tree: Guess B NO -> Guess C, Guess A, Guess R, WIN YES-> Guess T NO -> Guess A, Guess R, WIN YES-> Guess A, WIN
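
    A brute-force sketch of one way to attack this (my own, not from the question): treat it as a game tree in which the player picks a letter and the house then picks whichever consistent answer hurts the player most. It only decides who wins rather than printing the decision tree the question asks for, has no memoization, and does not model letters that are already revealed on the board, so it is meant purely to illustrate the recursion on small inputs like the toy dictionary:

      #include <iostream>
      #include <map>
      #include <set>
      #include <string>
      #include <vector>

      // True if the player can force a win: words are the candidates still
      // consistent with play so far, guessed holds letters already tried,
      // wrongLeft is the number of wrong guesses the player may still make.
      bool playerWins(const std::vector<std::string>& words,
                      const std::set<char>& guessed, int wrongLeft) {
          if (words.size() == 1) {                  // word pinned down and fully guessed?
              bool done = true;
              for (char c : words[0])
                  if (!guessed.count(c)) done = false;
              if (done) return true;
          }
          if (wrongLeft <= 0) return false;

          for (char c = 'a'; c <= 'z'; ++c) {
              if (guessed.count(c)) continue;
              // Group the candidates by where c occurs; the house will answer
              // with whichever group lets it escape.
              std::map<std::string, std::vector<std::string> > groups;
              for (const std::string& w : words) {
                  std::string pattern(w.size(), '.');
                  for (std::size_t i = 0; i < w.size(); ++i)
                      if (w[i] == c) pattern[i] = c;
                  groups[pattern].push_back(w);
              }
              std::set<char> next = guessed;
              next.insert(c);
              bool winsWithC = true;
              for (const auto& kv : groups) {
                  bool miss = kv.first.find(c) == std::string::npos;
                  if (!playerWins(kv.second, next, wrongLeft - (miss ? 1 : 0))) {
                      winsWithC = false;
                      break;
                  }
              }
              if (winsWithC) return true;           // this guess corners the house
          }
          return false;                             // every guess lets the house escape
      }

      int main() {
          std::vector<std::string> dict = {"bat", "bar", "car"};  // toy dictionary above
          std::cout << (playerWins(dict, std::set<char>(), 3)
                            ? "player wins" : "house wins") << std::endl;
      }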

    Read the article

  • SQL Server: Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of Performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data? Memory, including on the application side (C++)? Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)? Anything else? Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where real maximum sizes will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

    Read the article

  • Virtual functions - base class pointer

    - by user980411
    I understood why a base class pointer is made to point to a derived class object. But, I fail to understand why we need to assign to it, a base class object, when it is a base class object by itself. Can anyone please explain that? #include <iostream> using namespace std; class base { public: virtual void vfunc() { cout << "This is base's vfunc().\n"; } }; class derived1 : public base { public: void vfunc() { cout << "This is derived1's vfunc().\n"; } }; int main() { base *p, b; derived1 d1; // point to base p = &b; p->vfunc(); // access base's vfunc() // point to derived1 p = &d1; p->vfunc(); // access derived1's vfunc() return 0; }
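
    Not from the question, but one common reason to point a base pointer at base objects as well: a single container of base* can hold both kinds of objects, and the virtual call still picks the right override per object at run time. A minimal sketch in the same style:

      #include <iostream>
      #include <vector>
      using namespace std;

      class base {
      public:
          virtual void vfunc() { cout << "This is base's vfunc().\n"; }
          virtual ~base() {}
      };

      class derived1 : public base {
      public:
          void vfunc() { cout << "This is derived1's vfunc().\n"; }
      };

      int main() {
          base b;
          derived1 d1;

          // The same base* elements can refer to base or derived objects;
          // which vfunc() runs is decided per object at run time.
          vector<base*> objects;
          objects.push_back(&b);
          objects.push_back(&d1);
          for (size_t i = 0; i < objects.size(); ++i)
              objects[i]->vfunc();   // prints base's, then derived1's
          return 0;
      }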

    Read the article

  • How to tell the Session to throw the error query[NHibernate]?

    - by xandy
    I made a test class against the repository methods shown below: public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File { try { _session.Save(FileToAdd); _session.Flush(); } catch (Exception e) { if (e.InnerException.Message.Contains("Violation of UNIQUE KEY")) throw new ArgumentException("Unique Name must be unique"); else throw e; } } public void RemoveFile(File FileToRemove) { _session.Delete(FileToRemove); _session.Flush(); } And the test class: try { Data.File crashFile = new Data.File(); crashFile.UniqueName = "NonUniqueFileNameTest"; crashFile.Extension = ".abc"; repo.AddFile(crashFile); Assert.Fail(); } catch (Exception e) { Assert.IsInstanceOfType(e, typeof(ArgumentException)); } // Clean up the file Data.File removeFile = repo.GetFiles().Where(f => f.UniqueName == "NonUniqueFileNameTest").FirstOrDefault(); repo.RemoveFile(removeFile); The test fails. When I step in to trace the problem, I found out that when I do the _session.flush() right after _session.delete(), it throws the exception, and if I look at the sql it does, it is actually submitting a "INSERT INTO" statement, which is exactly the sql that cause UNIQUE CONSTRAINT error. I tried to encapsulate both in transaction but still same problem happens. Anyone know the reason?

    Read the article

  • Wordpress curl save Images

    - by Jeton Ramadani
    I am working on saving images from external sites into a folder in my wordpress theme. And I was wondering if its Ok to call curl twice or can it be done with one time. Example: $data = get_url('http://www.veoh.com/watch/v19935546Y8hZPgbZ'); // getting the url first curl instance preg_match('/fullHighResImagePath="(.*?)"/', $data, $thumbnail); // find the image from content savePhoto($thumbnail, $post->ID); //2nd instance of curl to save the image function get_url($url) { $user_agent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2)"; $ytc = curl_init(); // initialize curl handle curl_setopt($ytc, CURLOPT_URL, $url); // set url to post to curl_setopt($ytc, CURLOPT_FAILONERROR, 1); // Fail on errors curl_setopt($ytc, CURLOPT_FOLLOWLOCATION, 1); // allow redirects curl_setopt($ytc, CURLOPT_RETURNTRANSFER, 1); // return into a variable curl_setopt($ytc, CURLOPT_PORT, 80); //Set the port number curl_setopt($ytc, CURLOPT_TIMEOUT, 15); // times out after 15s curl_setopt($ytc, CURLOPT_HEADER, 1); // include HTTP headers curl_setopt($ytc, CURLOPT_USERAGENT, $user_agent); $source = curl_exec($ytc); curl_close($ytc); $data = trim( $source ); return $data; } function savePhoto($remoteImage, $isbn) { $ch = curl_init(); curl_setopt ($ch, CURLOPT_URL, $remoteImage); curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt ($ch, CURLOPT_CONNECTTIMEOUT, 0); $fileContents = curl_exec($ch); curl_close($ch); if (DIRECTORY_SEPARATOR=='/'){ $absolute_path = dirname(__FILE__).'/'; } else { $absolute_path = str_replace('\\', '/', dirname(__FILE__)).'/'; } $newImg = imagecreatefromstring($fileContents); return imagejpeg($newImg, $absolute_path ."video_images/{$isbn}.jpg",100); }

    Read the article

  • Programmatically add an ISAPI extension dll in IIS 7 using ADSI?

    - by fretje
    I apologize beforehand, this is a cross post of this SO question. I thought I'd ask it there first, but apparently it doesn't harvest any answers there. I hope it will get more attention here. When I have an answer somewhere, I'll delete the other one. I'm trying to programmatically add an ISAPI extension dll in IIS using ADSI. This has been working for ages on previous versions of IIS, but it seems to fail on IIS 7. I am using similar code like shown in this question: var web = GetObject("IIS://localhost/W3SVC/1/ROOT/specificVirtualDirectory"); var maps = web.ScriptMaps.toArray(); map[maps.length] = ".aaa,c:\\path\\to\\isapi\\extension.dll,1,GET,POST"; web.ScriptMaps = maps.asDictionary(); web.SetInfo(); After executing that code, I do see an "AboMapperCustom-12345678" entry for that specific dll in the "Handler mappings" of the specific virtual directory in which I added the script map. But when I try to use that extension in a browser, I always get HTTP Error 404.2 Not Found The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server. Even after adding an entry to allow that specific dll in the "ISAPI and CGI restrictions", I keep getting that error. To make it actually work, I first have to undo these steps (encountering the same issue like the OP of the question mentioned above: after deleting the script map entry from the IIS manager GUI, I also have to programmatically delete it using ADSI before it's actually gone from the metabase). And then manually add an entry like this: inetmgr - webserver - website - virtual directory - handler mappings - add script map... path = *.dll, executable = <path to dll>, name = <doesn't matter, but it's mandatory> click "yes" on the question "do you want to allow this ISAPI extension?" When I compare the 2 entries, they are exactly the same, except for the "Entry Type" which seems to be "Inherited" for the programmatically added one and "Local" for the one added manually. The strange thing is, even though it says "Inherited", I don't see it anywhere in IIS on a higher level. Where is it inheriting from? In my code, I do add the script map to the specific virtual directory so it should be "Local" as well. Maybe there is the problem, but I don't know how to add a "Local" Script Map using ADSI. I really would like to keep using the ADSI method, as otherwise I will have to use different methods in our setup when working with IIS 7 or previous versions, and I would like to avoid that. To recap: How can I programmatically add a script map entry and its companion CGI and ISAPI restrictions entry to IIS 7 using ADSI? Anybody who can shed some light on this? Any help appreciated.

    Read the article

  • SBS 2008 Backup Drive Full - Error Code '2147942512'

    - by HK1
    We are using Windows Backup on SBS 2008 SP2 and backing up to 1TB external hard drives. Recently, after switching drives, our backup started failing because the backup drive is full and auto-delete isn't automatically deleting older backups/shadow copies. I'm trying to get more information to help me effectively prevent this problem from recurring in the future. How I can tell that the drive is getting full: in the Event Viewer under Windows Logs > Application, I'm seeing Event ID 517, but it fails to show an intelligible description. However, under Applications and Services Logs > Microsoft > Windows > Backup > Operational, I'm seeing an event with the ID of 5 and a description like this: Backup started at '10/4/2011 12:30:12 PM' failed with following error code '2147942512'. One of the most informative posts I've found on this error is located on Microsoft's TechNet Forums here. In that post, a Microsoft representative gives this hazy explanation: auto-delete feature to ensure that at least some old backup copies are maintained on the disk -- does not automatically delete backups if space utilization by older copies is less than 1/8 of the disk size or in other words, 13% of the disk size. that means if the one full backup copy does not fit in the 7/8 of the disk size, backup may fail with disk full error. auto-delete will not automatically delete older versions to reclaim more older versions of backup. In the above explanation, I do not understand what is meant by "older copies", except that it appears that anything older than the very last shadow copy would be considered "older copies". I'm going to make the assumption that this problem where auto-delete will not work will affect any hard drive that is large enough to make an effective backup drive, or in other words, any hard drive that is large enough to hold more than one backup/shadow copy at once. The same MS representative proposes the solution of using a larger backup drive. I can't understand how this will help; it appears to me it will simply delay the problem until a later date. In order to resolve this problem for now, I did the following: assign the backup drive a disk letter under Disk Management; run the command line with administrative rights; diskshadow.exe [enter]; delete shadows oldest x: [enter] (where X: is the letter you assigned your backup drive). I manually ran the above command some 60 or 80 times to free up about 200 GB of space on my 1 terabyte external hard drive. However, I do not feel this is a satisfactory solution to prevent the problem from happening again in the future. Does anyone have a solution to prevent your Windows Server backup drive from getting full?
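
    A possible way to avoid running the command 60-80 times by hand, assuming DiskShadow's script mode (diskshadow /s) is available on SBS 2008 - untested, and the drive letter X: and the file name delete-oldest.txt are placeholders:

      rem delete-oldest.txt contains the single line:  delete shadows oldest X:
      rem in a .cmd batch file run as administrator (use %i instead of %%i at an interactive prompt):
      for /L %%i in (1,1,60) do diskshadow /s delete-oldest.txt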

    Read the article

  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign Pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me. And Google searches haven't returned much fruitful. Edit: I'm updating this to now include my present steps 1) I create a Master Page of my template 2) I add a bunch of text frames where I want the imported data from the XML file to be places 3) I open the "Tags" window and Import and XML file 4) I mark my text frames in the Master Document with the appropriate tags 5) I then add a lot of pages (like 200) to the document 6) Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail. So there's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Anyone have any good tips for mail-merge like functionality with an XML document and auto-generation of InDesign pages? BTW, here's an example of Adobe's great documentation for merging repeated XML elements. There's gotta be more...InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data EDIT: Here's some of the sample XML, notice the ITEM will repeat. I've also truncated the data in the "desc" tag: <output> <item> <user_name>taude</user_name> <date>2009-02-21</date> <title>Wishful Thinking</title> <desc>Skiing up in Vermont on a beautiful day. This photo of</desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail> </item> <item> <user_name>taude</user_name> <date>2009-02-22</date> <title>Skiing Self Portrait</title> <desc>I was inspired by ML's self-portrait while </desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail> </item> </output> Here's what my imported XML looks like with the InDesign Structure

    Read the article

  • PXE-E32 TFTP Open Timeout While Attempting to PXE Boot from Windows Deployment Services

    - by bschafer
    I'm running Windows Deployment Services on Windows Server 2008 R2 on top of an ESX 4.0 box. This is the only function of this VM instance, although it had previously functioned as an AD Domain Controller. My DHCP server is running on our primary Domain Controller, which is also Server 2008 R2, but running on metal. Everything was working perfectly until we recently had our backup generator fail during a power outage, causing all of our servers and networking equipment to lose power for a period of time. When we brought all of our equipment back up, everything was working as expected except for WDS. Our network is split up into several different vlans. Now, depending on which vlan the client computer is on, it's behaving differently when attempting to PXE boot into WDS. Our servers are located on the 10.55.x.x vlan, which, due to the nature of it, has no DHCP server active in it. The first computer we plugged in happened to be in the 10.99.x.x vlan, which is supposed to be reserved for network management devices (i.e. switches), but we've been using it occasionally otherwise. That computer gave us PXE-E11 ARP Timeout errors. When we moved to a different computer on the 10.19.x.x vlan (for general purpose use), it finally gets an IP from DHCP, but it presents us with a very stumping PXE-E32 TFTP Open Timeout error. Before the power outage, it didn't matter which vlan a device was on; it would PXE boot and image just fine. I've made no changes to anything server-side. Everything is configured exactly the same way it was on my WDS and DHCP servers as before the power outage. I've tried several different computers, including different models. All of this, combined with the quirky behavior depending on the vlan, makes me think something went wrong in one or more of our switches, probably because of the power outage. Unfortunately, I'm no network guy, and I know very little about how to configure our switches properly. Is this an issue with switches, etc? If so, how can I fix it? Is there some magical option I'm not aware of? Does anybody out there have any hunches? I've pretty much exhausted my ideas. Our main switch is an HP Procurve 5406. We also have 3x HP Procurve 4208 switches. The ESX Server is an HP ProLiant DL380 G6. The WDS VM is currently using the VMXNET3 network adaptor, but we've also tried the E1000 adaptor.

    Read the article

  • dovecot imap ssl certificate issues

    - by mulllhausen
    i have been trying to configure my dovecot imap server (version 1.0.10 - upgrading is not an option at this stage) with a new ssl certificate on ubuntu like so: $ grep ^ssl /etc/dovecot/dovecot.conf ssl_disable = no ssl_cert_file = /etc/ssl/certs/mydomain.com.crt.20120904 ssl_key_file = /etc/ssl/private/mydomain.com.key.20120904 $ /etc/init.t/dovecot stop $ sudo dovecot -p $ [i enter the ssl password here] it doesn't show any errors and when i run ps aux | grep dovecot i get root 21368 0.0 0.0 12452 688 ? Ss 15:19 0:00 dovecot -p root 21369 0.0 0.0 71772 2940 ? S 15:19 0:00 dovecot-auth dovecot 21370 0.0 0.0 14140 1904 ? S 15:19 0:00 pop3-login dovecot 21371 0.0 0.0 14140 1900 ? S 15:19 0:00 pop3-login dovecot 21372 0.0 0.0 14140 1904 ? S 15:19 0:00 pop3-login dovecot 21381 0.0 0.0 14280 2140 ? S 15:19 0:00 imap-login dovecot 21497 0.0 0.0 14280 2116 ? S 15:29 0:00 imap-login dovecot 21791 0.0 0.0 14148 1908 ? S 15:48 0:00 imap-login dovecot 21835 0.0 0.0 14148 1908 ? S 15:53 0:00 imap-login dovecot 21931 0.0 0.0 14148 1904 ? S 16:00 0:00 imap-login me 21953 0.0 0.0 5168 944 pts/0 S+ 16:02 0:00 grep --color=auto dovecot which looks like it is all running fine. so then i test to see if i can telnet to the dovecot server, and this works fine: $ telnet localhost 143 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. * OK Dovecot ready. but when i test whether dovecot has configured the ssl certificates properly, it appears to fail: $ sudo openssl s_client -connect localhost:143 -starttls imap CONNECTED(00000003) depth=0 /description=xxxxxxxxxxxxxxxxx/C=AU/ST=xxxxxxxx/L=xxxx/O=xxxxxx/CN=*.mydomain.com/[email protected] verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 /description=xxxxxxxxxxx/C=AU/ST=xxxxxx/L=xxxx/O=xxxx/CN=*.mydomain.com/[email protected] verify error:num=27:certificate not trusted verify return:1 depth=0 /description=xxxxxxxx/C=AU/ST=xxxxxxxxxx/L=xxxx/O=xxxxx/CN=*.mydomain.com/[email protected] verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/description=xxxxxxxxxxxx/C=AU/ST=xxxxxxxxxx/L=xxxxxxxx/O=xxxxxxx/CN=*.mydomain.com/[email protected] i:/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 2 Primary Intermediate Server CA --- Server certificate -----BEGIN CERTIFICATE----- xxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxx . . . xxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxx== -----END CERTIFICATE----- subject=/description=xxxxxxxxxx/C=AU/ST=xxxxxxxxx/L=xxxxxxx/O=xxxxxx/CN=*.mydomain.com/[email protected] issuer=/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 2 Primary Intermediate Server CA --- No client certificate CA names sent --- SSL handshake has read 2831 bytes and written 342 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: xxxxxxxxxxxxxxxxxxxx Session-ID-ctx: Master-Key: xxxxxxxxxxxxxxxxxx Key-Arg : None Start Time: 1351661960 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) --- . OK Capability completed. at least, i'm assuming this is a failure???

    Read the article

  • Excel 2007 VBA macros don't work in Parallels

    - by MindModel
    I've got a complex Excel spreadsheet I need to use at work. My colleagues use the spreadsheet on Windows PC's, with no special configuration required. I want to run it on a MacBook Pro running Snow Leopard. The spreadsheet contains VBA macros which connect to external Oracle db's over the Internet. If I understand correctly, Excel on the Mac doesn't run VBA macros, so I have to use Parallels. I installed Parallels on the Mac and it's running correctly, as far as I can tell. I installed Excel 2007 under Parallels. I can open the Excel spreadsheet in Parallels and click buttons in the spreadsheet to run macros, but the macros fail with compiler errors. I don't have the password to the source code for the VBA macros, and if possible, I don't want to dig in to the code at that level. I know that there are quite a few things that could go wrong, and examining the VBA code might help, but I'm hoping to solve the problem without going down that road. The spreadsheet runs without any special configuration on Windows, so I'm wondering if anyone out there knows of any limitations of Excel VBA macros under Parallels, or anything else I could do to get this spreadsheet working. It's the only thing that's keeping me from using this MacBook Pro at work. Here is the error message: Compile error in hidden module: clsXXXXx0020Toolx0020Ser. This error commonly occurs when code is incompatible with the version, platform, or architecture of this application. Click Help for more info. Compile error in hidden module: A protected module contains a compilation error. Because the error is in a protected module it cannot be displayed. This error commonly occurs when code is incompatible with the version or architecture of this application (for example, code in a document targets 32-bit Microsoft Office applications but it is attempting to run on 64-bit Office). This error has the following cause and solution: Cause of the error: The error is raised when a compilation error exists in the VBA code inside a protected (hidden) module. The specific compilation error is not exposed because the module is protected. Possible solutions: If you have access to the VBA code in the document or project, unprotect the module, and then run the code again to view the specific error. If you do not have access to the VBA code in the document, then contact the document author to have the code in the hidden module updated.

    Read the article

  • Problem virtualizing Ubuntu 10.04 32 bit on VirtualBox 3.1 on Windows Vista 64 bit

    - by Adam Siddhi
    Software & Hardware Setup Host System : Windows Vista Home Premium SP1 64 bit Guest : Ubuntu 10.04 (ubuntu-10.04-desktop-i386.iso) 32 bit VM : VirtualBox 3.1.8 Hardware : Intel Core 2 Duo T6400 4GB SDRAM What Happened I followed the tutorial called Installing Ubuntu inside Windows using VirtualBox located here: www.psychocats.net/ubuntu/virtualbox At first I downloaded ubuntu-10.04-desktop-amd64.iso because I figured that it would be a perfect fit with my Vista 64 OS. I was wrong because it turns out the my Intel Core 2 Duo T6400 CPU does not have Intel® Virtualization Technology. So I had to go with the ubuntu-10.04-desktop-i386.iso which is 32 bit. This got me to the point where I could actually create the Ubuntu VM. So I set up the VM in VirtualBox (according to the tutorial I was following) to prepare for the Ubuntu 10.04 virtualization. Please go to my Picassa web album to see the screen shots of my VM settings and Ubuntu boot process so you can see what I experienced (they appear in the order that I experienced them in). www.picasaweb.google.com/rubysiddhi/ProblemVirtualizingUbuntu100432BitOnVirtualBox31OnWindowsVista64# The first 17 images show the VM settings. The last 8 show my attempt at virtualizing Ubuntu 10.04. You can see booting up but ultimately failing. The Specifics The one error message I got was: (process:210): GLib-WARNING **: getpwuid_r(): failed due to unknown user id (0) It appeared on a black screen that sort of looked like a Windows console screen but with out the c:\ or the ability to type. Then this error message got more complex when tons of text appeared in the screen. Pictures 23 - 25 in the album show this text. I should also mention that I found this post in the Ubuntu forums by zonination who seemed to have similar problems to mine even though they had a different set up. The main issue I think zonination and me may be having is the fact that we can not change the color mode to 32 bit while it is booting. I think the 16 bit color mode maybe making Ubuntu fail. Not certain though. Well I hope I explained my problem thoroughly and clearly. Thanks for the tutorial. It got me started but, now I hope to finish this process so I can start developing in Ubuntu. OH by the way if you want to actually see what happened play by play (with some classical in the background) check out the video I made over here: http://www.youtube.com/watch?v=XMbbm5E_0Xw Thanks! Regards, Adam

    Read the article

  • How to allow local LAN access while connected to Cisco VPN?

    - by Ian Boyd
    How can I maintain local LAN access while connected to Cisco VPN? When connecting using Cisco VPN, the server has the ability to instruct the client to prevent local LAN access. Assuming this server-side option cannot be turned off, how can I allow local LAN access while connected with a Cisco VPN client? I used to think it was simply a matter of routes being added that capture LAN traffic with a higher metric, for example: Network Destination Netmask Gateway Interface Metric 10.0.0.0 255.255.0.0 10.0.0.3 10.0.0.3 20 <--Local LAN 10.0.0.0 255.255.0.0 192.168.199.1 192.168.199.12 1 <--VPN Link And trying to delete the 10.0.x.x -> 192.168.199.12 route doesn't have any effect: >route delete 10.0.0.0 >route delete 10.0.0.0 mask 255.255.0.0 >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 192.168.199.12 >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 0x3 And while it still might simply be a routing issue, attempts to add or delete routes fail. At what level is the Cisco VPN client driver doing what in the networking stack that overrides a local administrator's ability to administer their machine? The Cisco VPN client cannot be employing magic. It's still software running on my computer. What mechanism is it using to interfere with my machine's network? What happens when an IP/ICMP packet arrives on the network? Where in the networking stack is the packet getting eaten? See also No internet connection with Cisco VPN Cisco VPN Client interrupts connectivity to my LDAP server Cisco VPN stops Windows 7 Browsing How can I prohibit the creation of a route in Windows XP upon connection to Cisco VPN? Rerouting local LAN and Internet traffic when in VPN VPN Client "Allow local LAN Access" Allow Local LAN Access for VPN Clients on the VPN 3000 Concentrator Configuration Example LAN access gone when I connect to VPN Windows XP Documentation: Route Edit: Things I've not yet tried: >route delete 10.0.* Update: Since Cisco has abandoned their old client in favor of AnyConnect (HTTP SSL based VPN), this question, unsolved, can be left as a relic of history. Going forward, we can try to solve the same problem with their new client.

    Read the article

  • Apache : Illegal override option FileInfo

    - by Kave
    I have installed a new Ubuntu 12.04 Server and setup Apache and MySQL. I am just trying to replicate what I have in my current server and came across one single problem. - FileInfo Within these two files below: /etc/apache2/sites-available/default-ssl /etc/apache2/sites-available/default I need to add some overrides for the apache server. Original: <Directory /var/www/MySite> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> New: <Directory /var/www/MySite> Options Indexes FollowSymLinks MultiViews AllowOverride FileInfo, Indexes Order allow,deny allow from all </Directory> I have installed the following mods for Apache: sudo apt-get install lamp-server^ -y sudo apt-get install apache2.2-common apache2-utils openssl openssl-blacklist openssl-blacklist-extra -y sudo apt-get install curl libcurl3 libcurl3-dev php5-curl -y sudo apt-get install php5-tidy -y sudo apt-get install php5-gd -y sudo apt-get install php-apc -y sudo apt-get install memcached -y sudo apt-get install php5-memcache -y sudo a2enmod ssl sudo a2enmod rewrite sudo a2enmod headers sudo a2enmod expires sudo a2enmod php5 So When I do a restart with AllowOverride None, its all ok. sudo /etc/init.d/apache2 restart * Restarting web server apache2 ... waiting [OK] But as soon as I change the AllowOverride to FileInfo, Indexes Syntax error on line 11 of /etc/apache2/sites-enabled/000-default: Illegal override option FileInfo, Action 'configtest' failed. The Apache error log may have more information. ...fail! I can't see anything unusual in the error.log [Wed Jun 06 08:23:51 2012] [notice] caught SIGTERM, shutting down [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!? [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!? [Wed Jun 06 08:23:52 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.1 with Suhosin-Patch mod_ssl/2.2.22 OpenSSL/1.0.1 configured -- resuming normal operations I get that warning because its a test server, nonetheless I get the same warning with AllowOverride None and yet it restarts the Apache server correctly. Therefore this warning should be harmless. Have I missed something? Thanks,
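
    One thing that stands out (an observation, not a confirmed fix): AllowOverride takes a space-separated list of keywords, so the comma in "FileInfo, Indexes" is itself an illegal token. A sketch of the directive without it:

      <Directory /var/www/MySite>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride FileInfo Indexes
          Order allow,deny
          allow from all
      </Directory>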

    Read the article

  • vagrant up command very slow on OS X Lion

    - by Andy Hume
    When I run vagrant up to provision a new VM on Lion it takes an extremely long time, during which the entire Mac is very laggy and unresponsive. The output is as follows, the key point being the "notice: Finished catalog run in 754.28 seconds" > vagrant up [default] Importing base box 'lucid64'... [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.1.0 VirtualBox Version: 4.1.6 [default] Matching MAC address for NAT networking... [default] Clearing any previously set forwarded ports... [default] Forwarding ports... [default] -- ssh: 22 => 2222 (adapter 1) [default] -- web: 80 => 4567 (adapter 1) [default] Creating shared folders metadata... [default] Running any VM customizations... [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] VM booted and ready for use! [default] Mounting shared folders... [default] -- v-root: /vagrant [default] -- v-data: /var/www [default] -- manifests: /tmp/vagrant-puppet/manifests [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: /Stage[main]/Lucid64/Package[apache2]/ensure: ensure changed 'purged' to 'present' [default] [default] notice: /Stage[main]/Lucid64/File[/etc/motd]/ensure: defined content as '{md5}a25e31ba9b8489da9cd5751c447a1741' [default] [default] notice: Finished catalog run in 754.28 seconds [default] [default] err: /File[/var/lib/puppet/rrd]/ensure: change from absent to directory failed: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: change from absent to directory failed: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 2.05 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 1.36 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] >

    Read the article

  • Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca

    - by Marlen T. B.
    I have recently (as of July 2012) bought a HP Pavilion dv6-6c40ca laptop. It came pre-installed with Windows 7 on an MBR. I installed Ubuntu 12.04 on it on a GPT partition in what I think is BIOS emulation mode. I made a BIOS-Grub partition so the install didn't fail. That is what it is for .. right? Now I want to upgrade to UEFI mode. How would I Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca. Or is it impossible? My laptop, despite its new age may not be UEFI 2.0+ capable. If it isn't how can I install a software UEFI (i.e. a DUET such as the one by tianocore). Or is this too impossible? A link to my laptop's specs is: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03137924&tmp_task=prodinfoCategory&cc=ca&dlc=en&lang=en&lc=en&product=5218530 My laptop should have a UEFI given this link from HP http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c01442956#N218. And from the link I draw a quote: That means most notebooks distributed with Windows Vista, and all notebooks distributed with Windows 7, have the UEFI environment. My laptop had Windows 7 Home Premium pre-installed. OK. Following the comments so far -- NOTE: I am trying to do this on an external drive so I can see if it works. I have partitioned the drive using GParted as a GPT drive. Created a 200MB partition at the beginning of the drive with a FAT32 file system. Given the 200MB partition a label of "EFI". Set the boot flag on the 200MB partition. What should a do next to install Ubuntu 12.04? Given the link: https://help.ubuntu.com/community/UEFIBooting#Selecting_the_.28U.29EFI_Graphic_Protocol In my first read through (just to see if I will understand everything before I start) I get to step 2.3 Install GRUB2 in (U)EFI systems The first line is Boot into Linux (any live ISO) preferably in UEFI mode. Um .. how do you tell what mode your live CD is in?! And how do you change it if the mode is wrong?

    Read the article

  • UNIX - mount: only root can do that

    - by Travesty3
    I need to allow a non-root user to mount/unmount a device. I am a total noob when it comes to UNIX, so please dumb it down for me. I've been looking all over teh interwebz to find an answer and it seems everyone is giving the same one, which is to modify /etc/fstab to include that device with the 'user' option (or 'users', tried both). Cool, well I did that and it still says "mount: only root can do that". Here are the contents of my fstab: # /etc/fstab: static file system information. # # Use 'vol_id --uuid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # proc /proc proc defaults 0 0 # / was on /dev/mapper/minicc-root during installation UUID=1a69f02a-a049-4411-8c57-ff4ebd8bb933 / ext3 relatime,errors=remount-ro 0 1 # /boot was on /dev/sda5 during installation UUID=038498fe-1267-44c4-8788-e1354d71faf5 /boot ext2 relatime 0 2 # swap was on /dev/mapper/minicc-swap_1 during installation UUID=0bb583aa-84a8-43ef-98c4-c6cb25d20715 none swap sw 0 0 /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0 /dev/scd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 /dev/sdb1 /mnt/sdcard auto auto,user,rw,exec 0 0 My thumb drive partition shows up as /dev/sdb1. I'm pretty sure my fstab is set up OK, but everyone on the other posts seems to fail to mention how they actually call the 'mount' command once this entry is in the fstab file. I think this is where my problem may be. The command I use to mount the drive is: $ mount /dev/sdb1 /mnt/sdcard. /bin/mount is owned by root and is in the root group and has 4755 permissions. /bin/umount is owned by root and is in the root group and has 4755 permissions. /mnt/sdcard is owned by me and is in one of my groups and has 0755 permissions. My mount command works fine if I use sudo, but I need to be able to do this without sudo (need to be able to do it from a PHP script using shell_exec). Any suggestions? Sorry for making you read so much...just trying to get as much info in the initial post as possible to preemptively answer questions about configuration stuff. If I missed anything tho, ask away. Thanks! -Travis
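
    One detail worth checking (an assumption on my part, not verified against this setup): for the user option in fstab to take effect, mount is normally given only the mount point or only the device, so that it looks the entry up in fstab; passing both arguments bypasses the fstab entry, and then root is required again. For example:

      $ mount /mnt/sdcard       # mount point only - fstab supplies the device and options
      $ umount /mnt/sdcard      # the user who mounted it can unmount it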

    Read the article

  • How to properly add .NET assemblies to Powershell session?

    - by amandion
    I have a .NET assembly (a dll) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my Powershell script(s). However, I am running into a lot of issues with first loading the assembly, then using any of the types once the assembly is loaded. The complete file path is: C:\rnd\CloudBerry.Backup.API.dll In Powershell I use: $dllpath = "C:\rnd\CloudBerry.Backup.API.dll" Add-Type -Path $dllpath I get the error below: Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. At line:1 char:9 + Add-Type <<<< -Path $dllpath + CategoryInfo : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeComma ndAdd-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. Using the same cmdlet on another .NET assembly, DotNetZip, which has examples of using the same functionality on the site also does not work for me. I eventually find that I am seemingly able to load the assembly using reflection: [System.Reflection.Assembly]::LoadFrom($dllpath) Although I don't understand the difference between the methods Load, LoadFrom, or LoadFile that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors that describe that Powershell is unable to find any of the public types. I know the classes are there: $asm = [System.Reflection.Assembly]::LoadFrom($dllpath) $cbbtypes = $asm.GetExportedTypes() $cbbtypes | Get-Member -Static ---- start of excerpt ---- TypeName: CloudBerryLab.Backup.API.BackupProvider Name MemberType Definition ---- ---------- ---------- PlanChanged Event System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy... PlanRemoved Event System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved... CalculateFolderSize Method static long CalculateFolderSize() Equals Method static bool Equals(System.Object objA, System.Object objB) GetAccounts Method static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu... GetBackupPlans Method static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,... ReferenceEquals Method static bool ReferenceEquals(System.Object objA, System.Object objB) SetProfilePath Method static System.Void SetProfilePath(string profilePath) ----end of excerpt---- Trying to use static methods fail, I don't know why!!! [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts() Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is load ed. At line:1 char:42 + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts() + CategoryInfo : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException + FullyQualifiedErrorId : TypeNotFound Any guidance appreciated!!

    Read the article

< Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >