Search Results

Search found 6149 results on 246 pages for 'bugzy bug'.


  • Order of parts in SMTP multipart messages

    - by Chris
    Hi, I'd like to know how to build an SMTP multipart message in the correct order so that it will render correctly on the iPhone mail client (rendering correctly in GMail). I'm using Javamail to build up an email containing the following parts: A body part with content type "text/html; UTF-8" An embedded image attachment. A file attachment I am sending the mail via GMail SMTP (via SSL) and the mail is sent and rendered correctly using a GMail account, however, the mail does not render correctly on the iPhone mail client. On the iPhone mail client, the image is rendered before the "Before Image" text when it should be rendered afterwards. After the "Before Image" text there is an icon with a question mark (I assume it means it couldn't find the referenced CID). I'm not sure if this is a limitation of the iPhone mail client or a bug in my mail sending code (I strongly assume the latter). I think that perhaps the headers on my parts might by incorrect or perhaps I am providing the multiparts in the wrong order. I include the text of the received mail as output by gmail (which renders the file correc Message-ID: <[email protected]> Subject: =?UTF-8?Q?Test_from_=E3=82=AF=E3=83=AA=E3=82=B9?= MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_0_20870565.1274154021755" ------=_Part_0_20870565.1274154021755 Content-Type: application/octet-stream Content-Transfer-Encoding: base64 Content-ID: <20100518124021763_368238_0> iVBORw0K ----- TRIMMED FOR CONCISENESS 6p1VVy4alAAAAABJRU5ErkJggg== ------=_Part_0_20870565.1274154021755 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: 7bit <html><head><title>Employees Favourite Foods</title> <style> body { font: normal 8pt arial; } th { font: bold 8pt arial; white-space: nowrap; } td { font: normal 8pt arial; white-space: nowrap; } </style></head><body> Before Image<br><img src="cid:20100518124021763_368238_0"> After Image<br><table border="0"> <tr> <th colspan="4">Employees Favourite Foods</th> </tr> <tr> <th align="left">Name</th><th align="left">Age</th><th align="left">Tel.No</th><th align="left">Fav.Food</th> </tr> <tr style="background-color:#e0e0e0"> <td>Chris</td><td>34</td><td>555-123-4567</td><td>Pancakes</td> </tr> </table></body></html> ------=_Part_0_20870565.1274154021755 Content-Type: text/plain; charset=us-ascii; name=textfile.txt Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename=textfile.txt This is a textfile with numbers counting from one to ten beneath this line: one two three four five six seven eight nine ten(no trailing carriage return) ------=_Part_0_20870565.1274154021755-- Even if you can't assist me with this, I would appreciate it if any members of the forum could forward me a (non-personal) mail that includes inline images (not external hyperlinked images though). I just need to find a working sample then I can move past this. Thanks, Chris.
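    For inline images, mail clients generally expect the HTML body and the cid-referenced image to sit together inside a multipart/related container, with the HTML part first, the image part carrying an image/* Content-Type and Content-Disposition: inline, and ordinary attachments added alongside in the outer multipart/mixed. In the dump above the image part comes first and is typed application/octet-stream, which stricter clients such as the iPhone may refuse to associate with the cid reference. Below is a minimal JavaMail sketch of that structure; it is not the poster's code, and the class, variable and Content-ID names are placeholders.

    // Sketch: multipart/mixed( multipart/related( text/html, inline image ), attachment ).
    // Everything except the JavaMail API itself (names, Content-ID, file arguments) is made up.
    import java.io.File;
    import java.nio.file.Files;

    import javax.activation.DataHandler;
    import javax.mail.Part;
    import javax.mail.Session;
    import javax.mail.internet.MimeBodyPart;
    import javax.mail.internet.MimeMessage;
    import javax.mail.internet.MimeMultipart;
    import javax.mail.util.ByteArrayDataSource;

    public class InlineImageMailSketch {

        public static MimeMessage build(Session session, String html,
                                        File image, File textFile) throws Exception {
            // HTML body first; it references the image as <img src="cid:inline-image">.
            MimeBodyPart htmlPart = new MimeBodyPart();
            htmlPart.setContent(html, "text/html; charset=UTF-8");

            // The inline image: explicit image type, Content-ID matching the cid, inline disposition.
            MimeBodyPart imagePart = new MimeBodyPart();
            imagePart.setDataHandler(new DataHandler(
                    new ByteArrayDataSource(Files.readAllBytes(image.toPath()), "image/png")));
            imagePart.setContentID("<inline-image>");
            imagePart.setDisposition(Part.INLINE);

            MimeMultipart related = new MimeMultipart("related");
            related.addBodyPart(htmlPart);   // body part before the image it references
            related.addBodyPart(imagePart);

            MimeBodyPart relatedWrapper = new MimeBodyPart();
            relatedWrapper.setContent(related);

            // Ordinary file attachment stays outside the related group.
            MimeBodyPart attachment = new MimeBodyPart();
            attachment.attachFile(textFile);

            MimeMultipart mixed = new MimeMultipart("mixed");
            mixed.addBodyPart(relatedWrapper);
            mixed.addBodyPart(attachment);

            MimeMessage message = new MimeMessage(session);
            message.setSubject("Test", "UTF-8");
            message.setContent(mixed);
            message.saveChanges();
            return message;
        }
    }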

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
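    The diagnosis above boils down to sampling /proc/sys/fs/file-nr over time and watching the first field (allocated handles) climb. A small watcher like the sketch below automates that and prints the per-interval growth, which makes a leak rate like the roughly 900 handles per hour mentioned above easy to spot and to correlate with starting or stopping a suspect service such as the NFS client. This is only a diagnostic aid added for this listing, not something from the original post, and it assumes a Linux /proc filesystem.

    // Sketch: poll /proc/sys/fs/file-nr once a minute and print how fast the
    // "allocated file handles" counter (the first of the three numbers) grows.
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class FileHandleWatch {

        public static void main(String[] args) throws Exception {
            Path fileNr = Paths.get("/proc/sys/fs/file-nr");
            long intervalMs = 60_000L;
            long previous = readAllocated(fileNr);

            while (true) {
                Thread.sleep(intervalMs);
                long current = readAllocated(fileNr);
                System.out.printf("allocated=%d  delta=%+d over %d s%n",
                        current, current - previous, intervalMs / 1000);
                previous = current;
            }
        }

        private static long readAllocated(Path fileNr) throws Exception {
            String firstLine = Files.readAllLines(fileNr).get(0).trim();
            return Long.parseLong(firstLine.split("\\s+")[0]);
        }
    }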

    Read the article

  • handling filename* parameters with spaces via RFC 5987 results in '+' in filenames

    - by Peter Friend
    I have some legacy code I am dealing with (so no I can't just use a URL with an encoded filename component) that allows a user to download a file from our website. Since our filenames are often in many different languages they are all stored as UTF-8. I wrote some code to handle the RFC5987 conversion to a proper filename* parameter. This works great until I have a filename with non-ascii characters and spaces. Per RFC, the space character is not part of attr_char so it gets encoded as %20. I have new versions of Chrome as well as Firefox and they are all converting to %20 to + on download. I have tried not encoding the space and putting the encoded filename in quotes and get the same result. I have sniffed the response coming from the server to verify that the servlet container wasn't mucking with my headers and they look correct to me. The RFC even has examples that contain %20. Am I missing something, or do all of these browsers have a bug related to this? Many thanks in advance. The code I use to encode the filename is below. Peter public static boolean bcsrch(final char[] chars, final char c) { final int len = chars.length; int base = 0; int last = len - 1; /* Last element in table */ int p; while (last >= base) { p = base + ((last - base) >> 1); if (c == chars[p]) return true; /* Key found */ else if (c < chars[p]) last = p - 1; else base = p + 1; } return false; /* Key not found */ } public static String rfc5987_encode(final String s) { final int len = s.length(); final StringBuilder sb = new StringBuilder(len << 1); final char[] digits = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'}; final char[] attr_char = {'!','#','$','&','\'','+','-','.','0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z','^','_','a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z','|', '~'}; for (int i = 0; i < len; ++i) { final char c = s.charAt(i); if (bcsrch(attr_char, c)) sb.append(c); else { final char[] encoded = {'%', 0, 0}; encoded[1] = digits[0x0f & (c >>> 4)]; encoded[2] = digits[c & 0x0f]; sb.append(encoded); } } return sb.toString(); } Update Here is a screen shot of the download dialog I get for a file with Chinese characters with spaces as mentioned in my comment.
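    One thing worth ruling out is a second, form-style encoding pass somewhere between the servlet and the browser: application/x-www-form-urlencoded encoding turns spaces into '+', which is exactly what shows up in the saved filenames, whereas RFC 5987 percent-encoding has to produce %20. The sketch below is not the poster's code; it simply contrasts the two encodings using only the JDK, with an arbitrary sample filename, so the origin of a stray '+' is easier to recognise.

    // Sketch: form encoding (space -> '+') versus RFC 5987-style percent-encoding
    // (space -> %20). The sample filename is an arbitrary placeholder.
    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    public class SpaceEncodingDemo {

        public static void main(String[] args) throws UnsupportedEncodingException {
            String name = "年度 report 2010.pdf";

            // URLEncoder is form encoding: spaces come out as '+'. If anything in the
            // response pipeline applies this to the header value, browsers will keep
            // the '+' in the filename they save.
            System.out.println(URLEncoder.encode(name, "UTF-8"));

            // RFC 5987: every byte outside attr-char is %XX, so a space must be %20.
            System.out.println(percentEncodeUtf8(name));
        }

        private static String percentEncodeUtf8(String s) throws UnsupportedEncodingException {
            StringBuilder sb = new StringBuilder();
            for (byte b : s.getBytes("UTF-8")) {
                int v = b & 0xFF;
                boolean attrChar = (v >= 'a' && v <= 'z') || (v >= 'A' && v <= 'Z')
                        || (v >= '0' && v <= '9') || "!#$&+-.^_`|~".indexOf(v) >= 0;
                sb.append(attrChar ? String.valueOf((char) v) : String.format("%%%02X", v));
            }
            return sb.toString();
        }
    }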

    Read the article

  • The maven assembly plugin is not using the finalName for installing with attach=true?

    - by Roland Wiesemann
    I have configured following assembly: <build> <plugins> <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-5</version> <executions> <execution> <id>${project.name}-test-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>${project.name}-test</finalName> <filters> <filter>src/assemble/test/distribution.properties</filter> </filters> <descriptors> <descriptor>src/assemble/distribution.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> <execution> <id>${project.name}-prod-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>${project.name}-prod</finalName> <filters> <filter>src/assemble/prod/distribution.properties</filter> </filters> <descriptors> <descriptor>src/assemble/distribution.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> </executions> </plugin> </plugins> </build> This produced two zip-files: distribution-prod.zip distribution-test.zip My expectation for the property attach=true is, that the two zip-files are installed with the name as given in property finalName. But the result is, only one file is installed (attached) to the artifact. The maven protocol is: distrib-0.1-SNAPSHOT.zip distrib-0.1-SNAPSHOT.zip The plugin is using the artifact-id instead of property finalName! Is this a bug? The last installation is overwriting the first one. What can i do to install this two files with different names? Thanks for your investigation. Roland

    Read the article

  • Rx IObservable buffering to smooth out bursts of events

    - by Dan
    I have an Observable sequence that produces events in rapid bursts (ie: five events one right after another, then a long delay, then another quick burst of events, etc.). I want to smooth out these bursts by inserting a short delay between events. Imagine the following diagram as an example: Raw: --oooo--------------ooooo-----oo----------------ooo| Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o| My current approach is to generate a metronome-like timer via Observable.Interval() that signals when it's ok to pull another event from the raw stream. The problem is that I can't figure out how to then combine that timer with my raw unbuffered observable sequence. IObservable.Zip() is close to doing what I want, but it only works so long as the raw stream is producing events faster than the timer. As soon as there is a significant lull in the raw stream, the timer builds up a series of unwanted events that then immediately pair up with the next burst of events from the raw stream. Ideally, I want an IObservable extension method with the following function signature that produces the bevaior I've outlined above. Now, come to my rescue StackOverflow :) public static IObservable<T> Buffered(this IObservable<T> src, TimeSpan minDelay) PS. I'm brand new to Rx, so my apologies if this is a trivially simple question... 1. Simple yet flawed approach Here's my initial naive and simplistic solution that has quite a few problems: public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay) { Queue<T> q = new Queue<T>(); source.Subscribe(x => q.Enqueue(x)); return Observable.Interval(minDelay).Where(_ => q.Count > 0).Select(_ => q.Dequeue()); } The first obvious problem with this is that the IDisposable returned by the inner subscription to the raw source is lost and therefore the subscription can't be terminated. Calling Dispose on the IDisposable returned by this method kills the timer, but not the underlying raw event feed that is now needlessly filling the queue with nobody left to pull events from the queue. The second problem is that there's no way for exceptions or end-of-stream notifications to be propogated through from the raw event stream to the buffered stream - they are simply ignored when subscribing to the raw source. And last but not least, now I've got code that wakes up periodically regardless of whether there is actually any work to do, which I'd prefer to avoid in this wonderful new reactive world. 2. Way overly complex appoach To solve the problems encountered in my initial simplistic approach, I wrote a much more complicated function that behaves much like IObservable.Delay() (I used .NET Reflector to read that code and used it as the basis of my function). Unfortunately, a lot of the boilerplate logic such as AnonymousObservable is not publicly accessible outside the system.reactive code, so I had to copy and paste a lot of code. This solution appears to work, but given its complexity, I'm less confident that its bug free. I just can't believe that there isn't a way to accomplish this using some combination of the standard Reactive extensions. I hate feeling like I'm needlessly reinventing the wheel, and the pattern I'm trying to build seems like a fairly standard one.
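    For what it is worth, the "minimum gap between emissions" behaviour can be built from a single operator chain rather than a timer plus a queue, which keeps subscription, error and completion propagation intact. The sketch below is RxJava 3 rather than Rx.NET (one language is used for all examples added to this listing) and is only an analogous construction, not the .NET API: concatMap subscribes to one inner observable at a time, and each inner observable delays its single element, so consecutive emissions end up at least minDelay apart. The trade-off is that even an isolated element is delayed by minDelay.

    // Sketch in RxJava 3 (io.reactivex.rxjava3:rxjava on the classpath); an
    // analogue of the behaviour asked about, not the Rx.NET operators.
    import java.util.concurrent.TimeUnit;

    import io.reactivex.rxjava3.core.Observable;

    public final class Paced {

        private Paced() {
        }

        // Emits the source's elements in order, spacing consecutive emissions at
        // least minDelay apart (and delaying every element by at least minDelay).
        public static <T> Observable<T> paced(Observable<T> source, long minDelay, TimeUnit unit) {
            return source.concatMap(item -> Observable.just(item).delay(minDelay, unit));
        }

        public static void main(String[] args) {
            // A bursty source: three quick elements, a two-second pause, then two more.
            Observable<Integer> bursts = Observable.just(1, 2, 3)
                    .concatWith(Observable.just(4, 5).delay(2, TimeUnit.SECONDS));

            // timeInterval() reports the gap before each element, so the >= 500 ms
            // spacing is visible directly in the output.
            paced(bursts, 500, TimeUnit.MILLISECONDS)
                    .timeInterval()
                    .blockingSubscribe(t -> System.out.println("+" + t.time() + " ms: " + t.value()));
        }
    }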

    Read the article

  • Problem upgrading kernel on debian 3.1

    - by exhuma
    Hi, I have a quite old box in a remote server farm. So I have no direct access. Only remote SSH (and via SSH to a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency to glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is, that the install of a new kernel has an indirect (over locales) dependency to glibc. So, to install glibc, I need a new kernel. For a new kernel, I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed considering I have no "hardware" access? Here's a quick transcript of the upgrade process: [green:~]% sudo aptitude install linux-image-686 Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... Done The following packages are unused and will be REMOVED: gcc-4.3-base The following NEW packages will be automatically installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 module-init-tools yaird The following packages have been kept back: adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd [...snip...] util-linux vacation vim vim-common wamerican wbritish wget whiptail whois wwwconfig-common zlib1g The following NEW packages will be installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird The following packages will be upgraded: hotplug libc6 2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded. Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used. Do you want to continue? [Y/n/?] Writing extended state information... Done Preconfiguring packages ... (Reading database ... 34065 files and directories currently installed.) Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ... Checking for services that may need to be restarted... Checking init scripts... WARNING: init script for postgresql not found. [ --- libc6 config screen appears here --- ] WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later. If you use a kernel 2.4, please upgrade it before installing glibc. The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add etch sources to your /etc/apt/sources.list and run: apt-get install -t etch linux-image-2.6 Then reboot into this new kernel, and proceed with your upgrade dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack): subprocess pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Ack! Something bad happened while installing packages. Trying to recover: dpkg: dependency problems prevent configuration of locales: locales depends on glibc-2.7-1; however: Package glibc-2.7-1 is not installed. dpkg: error processing locales (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: locales Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... 
Done Now, if I follow the instructions as prompted I get the following. Note that I am using aptitude instead of apt-get to benefit from the better dependency tracking. I did try with apt-get first, but that led me to the same problem. [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686 Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... Done E: Unable to correct problems, you have held broken packages. E: Unable to correct dependencies, some packages cannot be installed E: Unable to resolve some dependencies! Some packages had unmet dependencies. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following packages have unmet dependencies: linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable or yaird (>= 0.0.13) but it is not installable or linux-initramfs-tool which is a virtual package. Any ideas?

    Read the article

  • JSTL c:forEach causes @ViewScoped bean to invoke @PostConstruct on every request

    - by Nitesh Panchal
    Hello, Again i see that the @PostConstruct is firing every time even though no binding attribute is used. See this code :- <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:c="http://java.sun.com/jsp/jstl/core"> <h:head> <title>Facelet Title</title> </h:head> <h:body> <h:form> <c:forEach var="item" items="#{TestBean.listItems}"> <h:outputText value="#{item}"/> </c:forEach> <h:commandButton value="Click" actionListener="#{TestBean.actionListener}"/> </h:form> </h:body> </html> And this is the simplest possible bean in JSF :- package managedBeans; import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.annotation.PostConstruct; import javax.faces.bean.ManagedBean; import javax.faces.bean.ViewScoped; @ManagedBean(name="TestBean") @ViewScoped public class TestBean implements Serializable { private List<String> listItems; public List<String> getListItems() { return listItems; } public void setListItems(List<String> listItems) { this.listItems = listItems; } public TestBean() { } @PostConstruct public void init(){ System.out.println("Post Construct fired!"); listItems = new ArrayList<String>(); listItems.add("Mango"); listItems.add("Apple"); listItems.add("Banana"); } public void actionListener(){ System.out.println("Action Listener fired!"); } } Do you see any behaviour that should cause postconstruct callback to fire each time? I think JSF 2.0 is highly unstable. If it has to fire PostConstruct each and every time what purpose does @ViewScoped serve. Why not to use @RequestScoped only? I thought i have made some mistake in my application. But when i created this simplest possible in JSF, i still get this error. Am i not understanding the scopes of JSF? or are they not testing it properly? Further, if you remove c:forEach and replace it with ui:repeat, then it works fine. Waiting for replies to confirm whether it is bug or it is intentional to stop the programmers from using jstl?

    Read the article

  • Round-twice error in .NET's Double.ToString method

    - by Jeppe Stig Nielsen
    Mathematically, consider for this question the rational number 8725724278030350 / 2**48 where ** in the denominator denotes exponentiation, i.e. the denominator is 2 to the 48th power. (The fraction is not in lowest terms, reducible by 2.) This number is exactly representable as a System.Double. Its decimal expansion is 31.0000000000000'49'73799150320701301097869873046875 (exact) where the apostrophes do not represent missing digits but merely mark the boudaries where rounding to 15 resp. 17 digits is to be performed. Note the following: If this number is rounded to 15 digits, the result will be 31 (followed by thirteen 0s) because the next digits (49...) begin with a 4 (meaning round down). But if the number is first rounded to 17 digits and then rounded to 15 digits, the result could be 31.0000000000001. This is because the first rounding rounds up by increasing the 49... digits to 50 (terminates) (next digits were 73...), and the second rounding might then round up again (when the midpoint-rounding rule says "round away from zero"). (There are many more numbers with the above characteristics, of course.) Now, it turns out that .NET's standard string representation of this number is "31.0000000000001". The question: Isn't this a bug? By standard string representation we mean the String produced by the parameterles Double.ToString() instance method which is of course identical to what is produced by ToString("G"). An interesting thing to note is that if you cast the above number to System.Decimal then you get a decimal that is 31 exactly! See this Stack Overflow question for a discussion of the surprising fact that casting a Double to Decimal involves first rounding to 15 digits. This means that casting to Decimal makes a correct round to 15 digits, whereas calling ToSting() makes an incorrect one. To sum up, we have a floating-point number that, when output to the user, is 31.0000000000001, but when converted to Decimal (where 29 digits are available), becomes 31 exactly. This is unfortunate. Here's some C# code for you to verify the problem: static void Main() { const double evil = 31.0000000000000497; string exactString = DoubleConverter.ToExactString(evil); // Jon Skeet, http://csharpindepth.com/Articles/General/FloatingPoint.aspx Console.WriteLine("Exact value (Jon Skeet): {0}", exactString); // writes 31.00000000000004973799150320701301097869873046875 Console.WriteLine("General format (G): {0}", evil); // writes 31.0000000000001 Console.WriteLine("Round-trip format (R): {0:R}", evil); // writes 31.00000000000005 Console.WriteLine(); Console.WriteLine("Binary repr.: {0}", String.Join(", ", BitConverter.GetBytes(evil).Select(b => "0x" + b.ToString("X2")))); Console.WriteLine(); decimal converted = (decimal)evil; Console.WriteLine("Decimal version: {0}", converted); // writes 31 decimal preciseDecimal = decimal.Parse(exactString, CultureInfo.InvariantCulture); Console.WriteLine("Better decimal: {0}", preciseDecimal); // writes 31.000000000000049737991503207 } The above code uses Skeet's ToExactString method. If you don't want to use his stuff (can be found through the URL), just delete the code lines above dependent on exactString. You can still see how the Double in question (evil) is rounded and cast.
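    The round-twice effect described above is easy to reproduce with exact decimal arithmetic, independent of .NET. The sketch below is Java (one language is used for all examples added to this listing) and uses BigDecimal: the same IEEE-754 double is converted exactly, rounded once to 15 significant digits, and then rounded to 17 digits first and to 15 digits second, with HALF_UP standing in for the "round away from zero" midpoint rule mentioned in the question. The two-step path is where the trailing ...0001 appears.

    // Sketch: exact expansion of the double from the question, single rounding to
    // 15 significant digits, and the 17-then-15 double rounding.
    import java.math.BigDecimal;
    import java.math.MathContext;
    import java.math.RoundingMode;

    public class RoundTwiceDemo {

        public static void main(String[] args) {
            // Parses to the same IEEE-754 double as the C# constant in the question.
            double evil = 31.0000000000000497;

            BigDecimal exact = new BigDecimal(evil); // exact binary-to-decimal conversion
            System.out.println("exact      : " + exact.toPlainString());
            // 31.00000000000004973799150320701301097869873046875

            // One rounding to 15 significant digits: the 16th digit is 4, so round down.
            System.out.println("15 digits  : "
                    + exact.round(new MathContext(15, RoundingMode.HALF_UP)).toPlainString());
            // 31.0000000000000

            // 17 digits first (...0497... becomes ...050), then 15: the new midpoint
            // rounds away from zero and the spurious final 1 shows up.
            BigDecimal seventeen = exact.round(new MathContext(17, RoundingMode.HALF_UP));
            System.out.println("17 digits  : " + seventeen.toPlainString());
            // 31.000000000000050
            System.out.println("17 then 15 : "
                    + seventeen.round(new MathContext(15, RoundingMode.HALF_UP)).toPlainString());
            // 31.0000000000001
        }
    }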

    Read the article

  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008R2 cluster related issue that is bothering me. I feel that I have come close as to what the issue is, but still don't fully understand what is happening. I have a two node exchange 2007 cluster running on two 2008R2 servers. The exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster ressource to the secondary node. When failing over the cluster to the "secondary" node, which for instance is on the same subnet as the "primary", the failover initially works ok and the cluster ressource continues to work for a couple of minutes on the new node. Which means that the recieving node does send out a gratuitous arp reply packet that updated the arp tables on the network. But after x amount of time (typically within 5 minutes time) something updates the arp-tables again because all of a sudden the cluster service does not answer to pings. So basically I start a ping to the exchange cluster address when its running on the "primary node". It works just great. I failover the cluster ressource group to the "secondary node" and I only have loss of one ping which is acceptable. The cluster ressource still answers for some time after being failed over and all of a sudden the ping starts timing out. This is telling me that the arp table initially is updated by the secondary node, but then something (which I haven't found out yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen - has anyone experienced the same problem? The cluster is NOT running NLB and the problem stops immidiately after failing over back to the primary node where there are no problems. Each node is using NIC teaming (intel) with ALB. Each node is on the same subnet and has gateway and so on entered correctly as far as I am concerned. Edit: I was wondering if it could be related to network binding order maybe? Because I have noticed that the only difference I can see from node to node is when showing the local arp table. On the "primary" node the arp table is generated on the cluster address as the source. While on the "secondary" its generated from the nodes own network card. Any input on this? Edit: Ok here is the connection layout. Cluster address: A.B.6.208/25 Exchange application address: A.B.6.212/25 Node A: 3 physical nics. Two teamed using intels teaming with the address A.B.6.210/25 called public The last one used for cluster traffic called private with 10.0.0.138/24 Node B: 3 physical nics. Two teamed using intels teaming with the address A.B.6.211/25 called public The last one used for cluster traffic called private with 10.0.0.139/24 Each node sits in a seperate datacenter connected together. End switches being cisco in DC1 and NEXUS 5000/2000 in DC2. Edit: I have been testing a little more. I have now created an empty application on the same cluster, and given it another ip address on the same subnet as the exchange application. After failing this empty application over, I see the exact same problem occuring. After one or two minutes clients on other subnets cannot ping the virtual ip of the application. But while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging. But if i then make another failover to the original state, then the situation is the opposite. So now clients on same subnet cannot, and on other they can. 
We have another cluster set up the same way and on the same subnet, with the same intel network cards, the same drivers and same teaming settings. Here we are not seeing this. So its somewhat confusing. Edit: OK done some more research. Removed the NIC teaming of the secondary node, since it didnt work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above. So it is somehow related to the teaming - maybe some kind of bug? Edit: Did some more failing over without being able to make it fail. So removing the NIC team looks like it was a workaround. Now I tried to reestablish the intel NIC teaming with ALB (as it was before) and i still cannot make it fail. This is annoying due to the fact that now i actually cannot pinpoint the root of the problem. Now it just seems to be some kind of MS/intel hick-up - which is hard to accept because what if the problem reoccurs in 14 days? There is a strange thing that happened though. After recreating the NIC team I was not able to rename the team to "PUBLIC" which the old team was called. So something has not been cleaned up in windows - although the server HAS been restarted! Edit: OK after restablishing the ALB teaming the error came back. So I am now going to do some thorough testing and i will get back with my observations. One thing is for sure. It is related to Intel 82575EB NICS, ALB and Gratuitous Arp.

    Read the article

  • Why so Long time span in creating Session Factory?

    - by vijay.shad
    Hi My project is web application running in the tomcat container. This application is a spring framework based hibernate application. The problem with this is it takes a lot of time when creates session factory. here is the logs 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] Session factory constructed with filter configurations : {} 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] instantiating session factory with properties: {java.vendor=Sun Microsystems Inc., sun.java.launcher=SUN_STANDARD, catalina.base=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.management.compiler=HotSpot Tiered Compilers, catalina.useNaming=true, os.name=Linux, sun.boot.class.path=/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes, java.util.logging.config.file=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/conf/logging.properties, java.vm.specification.vendor=Sun Microsystems Inc., hibernate.generate_statistics=true, java.runtime.version=1.6.0_17-b04, hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider, user.name=root, shared.loader=, tomcat.util.buf.StringCache.byte.enabled=true, hibernate.connection.release_mode=auto, user.language=en, java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory, sun.boot.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386, java.version=1.6.0_17, java.util.logging.manager=org.apache.juli.ClassLoaderLogManager, user.timezone=Canada/Pacific, sun.arch.data.model=32, java.endorsed.dirs=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/endorsed, sun.cpu.isalist=, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans., file.separator=/, java.specification.name=Java Platform API Specification, java.class.version=50.0, user.country=US, java.home=/usr/java/jdk1.6.0_17/jre, java.vm.info=mixed mode, os.version=2.6.18-128.el5, path.separator=:, java.vm.version=14.3-b01, hibernate.jdbc.batch_size=25, java.awt.printerjob=sun.print.PSPrinterJob, sun.io.unicode.encoding=UnicodeLittle, package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper., java.naming.factory.url.pkgs=org.apache.naming, sun.rmi.dgc.client.gcInterval=3600000, user.home=/root, java.specification.vendor=Sun Microsystems Inc., java.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386/server:/usr/java/jdk1.6.0_17/jre/lib/i386:/usr/java/jdk1.6.0_17/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib, java.vendor.url=http://java.sun.com/, java.vm.vendor=Sun Microsystems Inc., hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect, sun.rmi.dgc.server.gcInterval=3600000, common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar, java.runtime.name=Java(TM) SE Runtime Environment, java.class.path=:/usr/local/InstalledPrograms/apache-tomcat-6.0.20/bin/bootstrap.jar, hibernate.bytecode.use_reflection_optimizer=false, java.vm.specification.name=Java Virtual Machine Specification, java.vm.specification.version=1.0, catalina.home=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.cpu.endian=little, sun.os.patch.level=unknown, hibernate.cache.use_query_cache=true, hibernate.connection.provider_class=org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider, 
java.io.tmpdir=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/temp, java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi, server.loader=, os.arch=i386, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, java.ext.dirs=/usr/java/jdk1.6.0_17/jre/lib/ext:/usr/java/packages/lib/ext, user.dir=/, line.separator=, java.vm.name=Java HotSpot(TM) Server VM, hibernate.cache.use_second_level_cache=true, file.encoding=UTF-8, java.specification.version=1.6, hibernate.show_sql=true} 2010-04-15 23:08:53,516 DEBUG [AbstractEntityPersister] Static SQL for entity: com.vsd.model.Order There you can see the time delay of more than 3 mins in executing these processes. My database is mysql and database server is running on the local machine only. The container environment is Centos Linux system. I am clueless about why it takes that much of time in executing these process, But when i do the same task from under eclipse it does not take that much of time. Development environment is Windows.
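    When a SessionFactory build stalls like this on one machine but not another, one way to narrow it down is to time a bare JDBC connection on the slow box, separately from Hibernate: if DriverManager.getConnection() alone takes minutes (name resolution or network problems between the Tomcat host and MySQL are a frequent cause), then the mapping and configuration work Hibernate does is not the bottleneck. The sketch below is only that diagnostic; the JDBC URL, user and password are placeholders, not values from the question.

    // Diagnostic sketch: measure raw connection time to MySQL, so database
    // connectivity cost can be separated from Hibernate startup cost.
    // URL, user and password are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ConnectTiming {

        public static void main(String[] args) throws Exception {
            Class.forName("com.mysql.jdbc.Driver"); // MySQL Connector/J 5.x driver class

            long start = System.currentTimeMillis();
            Connection connection = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "password");
            try {
                System.out.println("Connected in " + (System.currentTimeMillis() - start)
                        + " ms to " + connection.getMetaData().getDatabaseProductVersion());
            } finally {
                connection.close();
            }
        }
    }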

    Read the article

  • uploading zip files in codeigniter won't work

    - by krike
    I have created a helper that requires some parameters and should upload a file, the function works for images however not for zip files. I searched on google and even added a MY_upload.php - http://codeigniter.com/bug_tracker/bug/6780/ however I still have the problem so I used print_r to display the array of the uploaded files, the image is fine however the zip array is empty: Array ( [file_name] => [file_type] => [file_path] => [full_path] => [raw_name] => [orig_name] => [file_ext] => [file_size] => [is_image] => [image_width] => [image_height] => [image_type] => [image_size_str] => ) Array ( [file_name] => 2385b959279b5e3cd451fee54273512c.png [file_type] => image/png [file_path] => I:/wamp/www/e-commerce/sources/images/ [full_path] => I:/wamp/www/e-commerce/sources/images/2385b959279b5e3cd451fee54273512c.png [raw_name] => 2385b959279b5e3cd451fee54273512c [orig_name] => 1269770869_Art_Artdesigner.lv_.png [file_ext] => .png [file_size] => 15.43 [is_image] => 1 [image_width] => 113 [image_height] => 128 [image_type] => png [image_size_str] => width="113" height="128" ) this is the function helper function multiple_upload($name = 'userfile', $upload_dir = 'sources/images/', $allowed_types = 'gif|jpg|jpeg|jpe|png', $size) { $CI =& get_instance(); $config['upload_path'] = realpath($upload_dir); $config['allowed_types'] = $allowed_types; $config['max_size'] = $size; $config['overwrite'] = FALSE; $config['encrypt_name'] = TRUE; $ffiles = $CI->upload->data(); echo "<pre>"; print_r($ffiles); echo "</pre>"; $CI->upload->initialize($config); $errors = FALSE; if(!$CI->upload->do_upload($name))://I believe this is causing the problem but I'm new to codeigniter so no idea where to look for errors $errors = TRUE; else: // Build a file array from all uploaded files $files = $CI->upload->data(); endif; // There was errors, we have to delete the uploaded files if($errors): @unlink($files['full_path']); return false; else: return $files; endif; }//end of multiple_upload() and this is the code in my controller if(!$s_thumb = multiple_upload('small_thumb', 'sources/images/', 'gif|jpg|jpeg|jpe|png', 1024)): //http://www.matisse.net/bitcalc/ $data['feedback'] = '<div class="error">Could not upload the small thumbnail!</div>'; $error = TRUE; endif; if(!$main_file = multiple_upload('main_file', 'sources/items/', 'zip', 307200)): $data['feedback'] = '<div class="error">Could not upload the main file!</div>'; $error = TRUE; endif;

    Read the article

  • Evidence-Based-Scheduling - are estimations only as accurate as the work-plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence Based Scheduling (for the uninitiated, Joel explains) for a while now and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered at some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up. Now, there's the garbage-can approach of creating a generic task/scheduled-item for "last minute hacks" or "integration tasks", or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases. It's often the case that during the development stage of a project you realize that there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3 month project may very well turn into a 6 month project, but not because your estimations were off (you could be the best estimator in the world, for those task the comprised your initial work plan); rather because you ended up adding a whole bunch of new tasks that weren't there to begin with. EBS doesn't help you with that. It could, theoretically (I guess). It could, perhaps, measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought. In other words, EBS works on a task basis, but not on a project/release basis - but the latter is what's important. It's what your boss typically cares about - delivery date, not the time it takes to finish each task along the way, and not the time it would have taken, if your planning was perfect. So the question is (yes, there's a question here, don't close it): What's your methodology when it comes to using EBS in FogBugz and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions? Edit Some more thoughts after reading a few answers: If it comes down to having to choose which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and choosing 80%, or 95%, or 60% (based on what, exactly?) then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case by case hour-sized estimation effort step? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be? People may be consistently bad estimators that do not even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good in planning as well? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic/pessimistic about certain tasks, and because of neglect to account for systematic delays (e.g. sick days, major bug crisis) - and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge in this early stage; and I don't see how an EBS-like system could fill that gap. So we're back to methodology. We need to find a way to accommodate bad or incomplete work plans that's better than voodoo-multiplication.

    Read the article

  • Google Drive Update keeps throwing an error when I try to save a thumbnail

    - by Dano64
    The Base64 string checks out fine. I was able to export it to another website and download it again as my image. When I try to do an update using the Drive Javascript api, it just keeps returning this error: Invalidvaluefor:Notavalidbase64bytestring I also true making the string URL safe. Per this page https://google-api-client-libraries.appspot.com/documentation/drive/v2/python/latest/drive_v2.files.html Am I doing something wrong here, the documentation says send Base64, the string is valid and, the string is intact throughout the process, but Google will not accept it? I am using the javascript api, I think there is maybe a bug sending the thumbnail using the javascript api. This is the request Request URL:https://www.googleapis.com/upload/drive/v2/files/?uploadType=multipart Request Method:POST Status Code:400 Bad Request **Request Headers** :host:www.googleapis.com :method:POST :path:/upload/drive/v2/files/?uploadType=multipart :scheme:https :version:HTTP/1.1 accept:*/* accept-encoding:gzip,deflate,sdch accept-language:en-US,en;q=0.8 authorization:Bearer ya29.AHES6ZQr...YXDacdY4 content-length:14313 content-type:multipart/mixed; boundary="--mpart_delim" origin:https://www.googleapis.com x-javascript-user-agent:google-api-javascript-client/1.1.0-beta x-origin:https://app.pinteract.com x-referer:https://app.pinteract.com **Query String Parameters** uploadType:multipart **Request Payload** ----mpart_delim Content-Type: application/json {"id":null,"title":"Test Pinup.pint","mimeType":"application/vnd.pinteract.pint","thumbnail":{"mimeType":"image/png","image":"iVBORw0KGgoAAAANSUh...UVORK5CYII%3D"}} ----mpart_delim Content-Type: application/vnd.pinteract.pint Content-Transfer-Encoding: binary { "header" : {"id":"215A660A"} "members" : [ {"id":"100523752012631912873"} ], "manifest" : [ {"id":"0000","ele":[],"own":"100523752012631912873","dtc":1371679680000,"txt":"&Delta; Created","typ":0} ], "elements" : [ {"id":"0F54","x":560,"y":264,"bak":"#44ff44","own":"100523752012631912873","srt":"544","sta":0,"wid":120,"hgt":120,"dtc":0,"rec":"","txt":"This is Note"} ] } ----mpart_delim-- **Response Headers** content-length:10848 content-type:application/json date:Mon, 01 Jul 2013 14:41:33 GMT server:HTTP Upload Server Built on Jun 25 2013 11:32:14 (1372185134) status:400 Bad Request version:HTTP/1.1 This is the Response { "error": { "errors": [ { "domain": "global", "reason": "invalid", "message": "Invalid value for: Not a valid base64 byte string: iVBORw0KGgoAAAANSUhEUg...AASUVORK5CYII%3D" } ], "code": 400, "message": "Invalid value for: Not a valid base64 byte string: iVBORw0KGgoAAAANSU...lRtkAAAAASUVORK5CYII%3D" } } Raw Base64 String 
iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAADjVJREFUeNrsXX2MHVUVP3fevH27SxfKhyBCsIqoKOhCBBISY5Hyj2JSGuXLP6SJwYAYICgxsUBr1AhqbE2I8Q9jjYE/CLFbP4IxVbaIYjAmS0wUC6ELRGy7tPt2t9vd997MPd6Z+3Xunfsq1r7tvNe56TAfO2/ezO93zu/ce+6ZB0DVqla1qlWtalWrWtWqVrWqVa1qJ1Nj/XCTm5+bHeUcz9+3f25tmvLVSYePIyLIBRhX24yxKXH63Kljw5Ni97UfbVhzpCLgGNs3/jp//r9en72l3UnWCsCvSjrpmQA54Nkq28q3FQn5mucL5OuIsUPi6f5Si6KJVasaT26/+T1vVgS8hfbl3+27br65dE+r1VknrH6IIfkjolmh/A8hwYJv1lySInYXhXc8W49r235++yVPVQQE2lee3n/d3OzS5tZS+2oWurlMYgj4qI45HsApCZYAzgFSsU7FfhSx5+txtHnnHR9+qiJAtPsnD1y4MLf09aXF9q3ZzTAmb8ohQaKtHEDLkGv9qIBHBbgmICXrlKyjKHoirkWbfn3X+EsnLQH37dr3mebs4o8x4WMSfGbAtyQoDSLga8BBeQJqqXE8QQIt12D2syVR2+LcxVpcu/23d1/++ElFwPdeXB555eWZ7x453LozEvuRtvqMgCyAhm7QyI7Sf7BEcI4mCNP9lBKhrD9R2zkJqVwLWfpOYyje8tSXLlsceAIy8Pf8Y98vOq1kXQZ8JG5By06kLZ/5+m8DcEiCMt3XXpCTwN0YkKrYkFu/0CfpCSC3M1IEEeILJ0ca9et/c/fliwNLQAb+SxR8JTkRJUAdy4DOtxCdQGyDb6ALyq0HuEHYyo+OAYlaUkVI7hkAk8ON+Ppd935kceAIMOC3BfjiayUB1gO0FAG4QTjfN9pTjAO0/0+9ICeDBmGyJDQoEzIyjxCr3YKET/7+vitWhIRopQh4+Z/7fyAtX4PPQG/X1JLt19Rit8XfIjDb7uftdWqa0Ox8sNfIF/CvK4mPFAB6yf4uaPzYYit5dKVwWREC7vrV619sLXU+L8Gg4AEBlXgFBdN8JgQ6M+fUHJDJNWiQD1yHkXOlDLLMcz53xTf/dNtASNBXn5l598y++b+L3k2j5gFdU1KkA2/kdEOR9IDsOpSCQBN4M/3nJuCaOEC03pEhJTuJJ0XtbEHsxBG75PmvXb2nrz3gzZnD2wVCjQgYsUAgVm2lRwdiR6LA9w4lS2YNnlcRq/avB653MfrdnueJf/XllP+kryXozl++toF30o9GnoVrvZU9HtINNaD44BFgHSmx4DP/swRkKz9F3WcGCOt9NQ0MwtWXbnn2xr4loL2cbNYDLAdcQgTz9FpbtAaUERANsJEFklHvIOMISiBzrqlJD4w/yKJJEL2lh/qSgDt2vrohTdJL/QdzBlvMtUQ6Io6cXgwBngZWD/xIeZVeM49o5g30KEGh+1Td4g+8f/Mfbuw7Atotbf0EVOchmQuGts5APsh4UNCbSBoDPE/oYt3B48zekzEIVF7Ae+cFPSHgCxOvXpwm/FLHslQ6GZgLfNfFk54iuMwhyAf2aJLTfcHg8cwLLnzwmQ/2DQFCem4pgOR4gzviLQDD3gIRzE1h+PshyaEB3wPYORccImVyUHR1P9s3BCSd9CZ/sOFPsjCV4ymkHViYCJOmYEV5co8xF1DnHrDwfe6cg1wzdZCRE8WY4tN9QYAY9Z4rBj7vzZNpiMWRHqJ5yKMCQcBg3cgJkpFPzjtkRw7x+ruwqxQBkUxNnGgXXbBp8rzSE3BksT3uZDDJUJahD7hKsGFACug+Cx8vegQWRtEsOOzHwO1h8WEQncOiSzpeegLaneQqB2wMEKHWGHhgigUzs2FYsHyAcNzonmfBgneZBfDoORk03nlF6QkQffCL3WdFM4uFSLUWHcuzTqPrTtCdkvSMlwb5ozZVPQHo3g8L+qgjOZ7t5HvHvScUH/ceUMrPrmkTzW+agbRrZswIkXmGKHUbPWCQWDH6shUK8n4g9aVQGQFT3+kYguGdGAoWHOas0nsAqowlnbUyD+BPpoNX56M/R7wAzHkItjLCDaQO1gZXBOcQ3Tego4dzYAsBMBw5yukB8o6ZARiVZSs7l8e8NSOKr60euwRM39J1bZBZG/KsRTuxhRoE+Q6T3gZqNPk0pWUXsfwEaA/IHyQHX1sZM/CiY9OWLH0GJ66ZSwYJuoYgDxMj9QiFeiFbzIWGA3oOBKrvdIWF9j9Z8NUPHoBoQUVq7RJsbnpCTIFNgEeZUANFghmZogWeYVG6bAmirYqgJSwFItC3eNdwqIdwpN7SBx7A0YZdRCo1ECQiMlYvk3WcDJAi7QGBYAzepDyqSXlTpKXWEPAI7sQk7RmWJFPeqGXInNcHHtBKUhiJa/KBsoq0CBSwQJIENg2RP7AqyNIdpYhYnfYI9KYo/SlJWxdKSSHgBTyAQzfZcqc9OfGG0veC4ijaTcsEgdZskqo1W7/jVrbpEhL6N7kPzpwu99dqHljPDaNXqGuvby0cQyQpY+EK8BSsV4i2u/QecLjdmR4biq2Gc5WCZmgSXJwzKeYkqqKyfDqyjZT12VEtFrqZ6FTC0bJ0S7AGMD8PSB2R8RjrOUhK3FPQniIlSpAxXXoCFlrJ1Kp6rLScmVElU2gzf5TFbJIMzawYM8GWBXI4tEoaObrvB5A6UBMLPI8qWj6aLmdeyqjApx6QSqKmjjdePSlL+dQPX5gdjqPVea2OKqqyBVFuXZCukgOg9TvZMVuamGc3/RJFJAMsXqyOowW53CtF0UW6VtK4qpgGaIvtlrhOS6yz8hS5na2xefDha0/vg4FYrsUT4v5vy1PD3JUfSjnKMZvt66PNxWA+k8VtahjdHhCQFzVoebpTC0S2uV+2SOOCkSFl/WitPy9hlN4x0TcTMjOLrZ1GRwMFsno77yVxWjzlWqUTfNH/u1to637GBT8/jxbr6uCOhBR1TqplCKTscNDygzt7gVXPKuM+8ejUbKOWyRCt61TlHqYQihVKSY7+loy0eqZAgkCRrl8d7b8Zk5rXlVSlnC5bF8dasiJOV8Zp6RHb0DzUA/npmQfkwXi5s80CwF2rRbdc0PEQ9eJEwdqJ9yQFj+HBc5MA+Nx5m9LuJ0R+8hhhJCgnd1uvcOoZAS/sn9sqBmVNCnoIzAJwafdj+uUKnvKuwCeBgCutHozM+DEhUYDbNSgScu1vit2tfUfA/CPrmgvtRHqBM4gCByBOQKTAUPAl8FwAb4tpE97dS5JuZFNZUtadKsATQ4RYAIhHwLameJa+IyBrfzswv3WxI7yASIEPjvumilpS1xMK3UjHe3xC1DXQexmDyhEJwh2OLviuJwjgcWsvMep5efr4t55b//ZThnbk4wBw+/6F2n3wS0v8EnUyIYM0tex1Q8l26qUkUjViThT4HZSLDLyQB9+ODLzZuTcI65/oawKyduW3/7zj9OH
6+ihEgDcA82t3GEBxMkQBqntFvPC+MDjdS5qm0IGWgt/RwKveT7Yvjk8I8G/oNTbxShDw4qHDGz901tia0bg2jirvo9fcIQC7lJ+gk6RHPdEPUMhcOj9ZAMW353O5kgAroNU2pxKUpxw2rgQ2K/aS3jsf3D1+/qrhp0fE2MCkIzwJ6lYy6Fu/JcCf8QqMB8Baf6JAT8i2lp9EyY441hSfuWbukXVTA0VATsIDu8fPO6Xx9HBGAi1NJ2XkAMUKtcKcJ523JeBz5ycLbMo5VUDbtZYgua/BF72fpvjMNc0VAn/FCcjaBQ9Mjr9jtJF7An1zhQZiO0VvM6VQSEdbDwCPACpDKenVpMbybW9Hb2ejXfHpFQX/hBCQk7Bpcvzs4aEdo3G0hr4fHEGoiNdNRdOUhEuElSOdTEvABt2EkNFx+v3CCwCmUfZ4plYaixP2Yx2n3r9r9QWnNHasrsdrQ68KwX8hAOiEjJ421H18oGkFRQSXg6+O7gUZr4BJcYUbejnYKiUBuq3ZNHnPGY36Q42IrWYBL2B+4Szt/2vZAfJDTd6kigZeSw7J82R6v2X2kXVbT+Tzl+IHm4Q3rDlnuP6Q6KbeVjfxAPWbis7NojcdqSfWubJ8riUIrAfo1ENiPAS2i2NbRE9n+kQ/e6l+suw0QcSZwhtEL8kQQUvbTb0aLec0lk9Gv2AHYwkhQvzbLpYthx6+drosz1zKH+3LPGIsrq2PDu79/vA5F8p3igMEAFjZQV+CrCxNicM/FdY/UQaL7wsCdBsZGcH6GefB2EVXwsi574Pa294FSeM0aNVPpeMyFYQRhtrzEC0dgnTmFWi9sQcW9k5NLr2x5+PQm7ra/klF/F8WcngG4r1/hLG5F+HsQ+fA6OgoDDUaEMdx9rtv0vLTFNqdDrSWl2FhYQEOHNgPyeysCLOzzAsfpWsRDH5jZfb2gSQA3fQFKzMJJ4sHlJaEeFA9QHlB6G1YLFNMGGAPQPqMrKyeEMNgN5pg5V08AU6kNwwsASQQUy/nXdzkhElSPODgs4DMhkioPKA3RJgfvwoFCN6FkIqA4xwDQgTo90eSKgj3oHGe0q5oHBgVZwR0ql5Qz2MAhjwg+2O7Goj1VvtpD6jugb8AJUrMDagEcToSjgn48yCrzkvTBjYZx7khoK6e82AZgu5JQ4Bqi+oZ/10m3e8bAgSQE8dOQE7CJMh3e5fL+oylJiCKom3HSoBYDqdp+pjYLfX/Ta/UBBw5cmSSMbb1WAhgLNqYJMnLZZfL0seAZrN5rwDzfyGhWa/XNx48+OaT/RCv+iIIT0/vvbfRaFwjvOFoMSErLdxeq9Uu27v3le390mFg0IftpptvXetURXDefPyxn01B1apWtapVrWpVq1rVqla1t9L+I8AAvydNUrElRtkAAAAASUVORK5CYII="}],"code":400,"message":"Invalidvaluefor:Notavalidbase64bytestring:iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAADjVJREFUeNrsXX2MHVUVP3fevH27SxfKhyBCsIqoKOhCBBISY5Hyj2JSGuXLP6SJwYAYICgxsUBr1AhqbE2I8Q9jjYE/CLFbP4IxVbaIYjAmS0wUC6ELRGy7tPt2t9vd997MPd6Z+3Xunfsq1r7tvNe56TAfO2/ezO93zu/ce+6ZB0DVqla1qlWtalWrWtWqVrWqVa1qJ1Nj/XCTm5+bHeUcz9+3f25tmvLVSYePIyLIBRhX24yxKXH63Kljw5Ni97UfbVhzpCLgGNs3/jp//r9en72l3UnWCsCvSjrpmQA54Nkq28q3FQn5mucL5OuIsUPi6f5Si6KJVasaT26/+T1vVgS8hfbl3+27br65dE+r1VknrH6IIfkjolmh/A8hwYJv1lySInYXhXc8W49r235++yVPVQQE2lee3n/d3OzS5tZS+2oWurlMYgj4qI45HsApCZYAzgFSsU7FfhSx5+txtHnnHR9+qiJAtPsnD1y4MLf09aXF9q3ZzTAmb8ohQaKtHEDLkGv9qIBHBbgmICXrlKyjKHoirkWbfn3X+EsnLQH37dr3mebs4o8x4WMSfGbAtyQoDSLga8BBeQJqqXE8QQIt12D2syVR2+LcxVpcu/23d1/++ElFwPdeXB555eWZ7x453LozEvuRtvqMgCyAhm7QyI7Sf7BEcI4mCNP9lBKhrD9R2zkJqVwLWfpOYyje8tSXLlsceAIy8Pf8Y98vOq1kXQZ8JG5By06kLZ/5+m8DcEiCMt3XXpCTwN0YkKrYkFu/0CfpCSC3M1IEEeILJ0ca9et/c/fliwNLQAb+SxR8JTkRJUAdy4DOtxCdQGyDb6ALyq0HuEHYyo+OAYlaUkVI7hkAk8ON+Ppd935kceAIMOC3BfjiayUB1gO0FAG4QTjfN9pTjAO0/0+9ICeDBmGyJDQoEzIyjxCr3YKET/7+vitWhIRopQh4+Z/7fyAtX4PPQG/X1JLt19Rit8XfIjDb7uftdWqa0Ox8sNfIF/CvK4mPFAB6yf4uaPzYYit5dKVwWREC7vrV619sLXU+L8Gg4AEBlXgFBdN8JgQ6M+fUHJDJNWiQD1yHkXOlDLLMcz53xTf/dNtASNBXn5l598y++b+L3k2j5gFdU1KkA2/kdEOR9IDsOpSCQBN4M/3nJuCaOEC03pEhJTuJJ0XtbEHsxBG75PmvXb2nrz3gzZnD2wVCjQgYsUAgVm2lRwdiR6LA9w4lS2YNnlcRq/avB653MfrdnueJf/XllP+kryXozl++toF30o9GnoVrvZU9HtINNaD44BFgHSmx4DP/swRkKz9F3WcGCOt9NQ0MwtWXbnn2xr4loL2cbNYDLAdcQgTz9FpbtAaUERANsJEFklHvIOMISiBzrqlJD4w/yKJJEL2lh/qSgDt2vrohTdJL/QdzBlvMtUQ6Io6cXgwBngZWD/xIeZVeM49o5g30KEGh+1Td4g+8f/Mfbuw7Atotbf0EVOchmQuGts5APsh4UNCbSBoDPE/oYt3B48zekzEIVF7Ae+cFPSHgCxOvXpwm/FLHslQ6GZgLfNfFk54iuMwhyAf2aJLTfcHg8cwLLnzwmQ/2DQFCem4pgOR4gzviLQDD3gIRzE1h+PshyaEB3wPYORccImVyUHR1P9s3BCSd9CZ/sOFPsjCV4ymkHViYCJOmYEV5co8xF1DnHrDwfe6cg1wzdZCRE8WY4tN9QYAY9Z4rBj7vzZNpiMWRHqJ5yKMCQcBg3cgJkpFPzjtkRw7x+ruwqxQBkUxNnGgXXbBp8rzSE3BksT3uZDDJUJahD7hKsGFACug+Cx8vegQWRtEsOOzHwO1h8WEQncOiSzpeegLaneQqB2wMEKHWGHhgigUzs2FYsHyAcNzonmfBgneZBfDoORk03nlF6QkQffCL3WdFM4uFSLUWHcuzTqPrTtCdkvSMlwb5ozZVPQHo3g8L+qgjOZ7t5HvHvScUH/ceUM
rPrmkTzW+agbRrZswIkXmGKHUbPWCQWDH6shUK8n4g9aVQGQFT3+kYguGdGAoWHOas0nsAqowlnbUyD+BPpoNX56M/R7wAzHkItjLCDaQO1gZXBOcQ3Tego4dzYAsBMBw5yukB8o6ZARiVZSs7l8e8NSOKr60euwRM39J1bZBZG/KsRTuxhRoE+Q6T3gZqNPk0pWUXsfwEaA/IHyQHX1sZM/CiY9OWLH0GJ66ZSwYJuoYgDxMj9QiFeiFbzIWGA3oOBKrvdIWF9j9Z8NUPHoBoQUVq7RJsbnpCTIFNgEeZUANFghmZogWeYVG6bAmirYqgJSwFItC3eNdwqIdwpN7SBx7A0YZdRCo1ECQiMlYvk3WcDJAi7QGBYAzepDyqSXlTpKXWEPAI7sQk7RmWJFPeqGXInNcHHtBKUhiJa/KBsoq0CBSwQJIENg2RP7AqyNIdpYhYnfYI9KYo/SlJWxdKSSHgBTyAQzfZcqc9OfGG0veC4ijaTcsEgdZskqo1W7/jVrbpEhL6N7kPzpwu99dqHljPDaNXqGuvby0cQyQpY+EK8BSsV4i2u/QecLjdmR4biq2Gc5WCZmgSXJwzKeYkqqKyfDqyjZT12VEtFrqZ6FTC0bJ0S7AGMD8PSB2R8RjrOUhK3FPQniIlSpAxXXoCFlrJ1Kp6rLScmVElU2gzf5TFbJIMzawYM8GWBXI4tEoaObrvB5A6UBMLPI8qWj6aLmdeyqjApx6QSqKmjjdePSlL+dQPX5gdjqPVea2OKqqyBVFuXZCukgOg9TvZMVuamGc3/RJFJAMsXqyOowW53CtF0UW6VtK4qpgGaIvtlrhOS6yz8hS5na2xefDha0/vg4FYrsUT4v5vy1PD3JUfSjnKMZvt66PNxWA+k8VtahjdHhCQFzVoebpTC0S2uV+2SOOCkSFl/WitPy9hlN4x0TcTMjOLrZ1GRwMFsno77yVxWjzlWqUTfNH/u1to637GBT8/jxbr6uCOhBR1TqplCKTscNDygzt7gVXPKuM+8ejUbKOWyRCt61TlHqYQihVKSY7+loy0eqZAgkCRrl8d7b8Zk5rXlVSlnC5bF8dasiJOV8Zp6RHb0DzUA/npmQfkwXi5s80CwF2rRbdc0PEQ9eJEwdqJ9yQFj+HBc5MA+Nx5m9LuJ0R+8hhhJCgnd1uvcOoZAS/sn9sqBmVNCnoIzAJwafdj+uUKnvKuwCeBgCutHozM+DEhUYDbNSgScu1vit2tfUfA/CPrmgvtRHqBM4gCByBOQKTAUPAl8FwAb4tpE97dS5JuZFNZUtadKsATQ4RYAIhHwLameJa+IyBrfzswv3WxI7yASIEPjvumilpS1xMK3UjHe3xC1DXQexmDyhEJwh2OLviuJwjgcWsvMep5efr4t55b//ZThnbk4wBw+/6F2n3wS0v8EnUyIYM0tex1Q8l26qUkUjViThT4HZSLDLyQB9+ODLzZuTcI65/oawKyduW3/7zj9OH6+ihEgDcA82t3GEBxMkQBqntFvPC+MDjdS5qm0IGWgt/RwKveT7Yvjk8I8G/oNTbxShDw4qHDGz901tia0bg2jirvo9fcIQC7lJ+gk6RHPdEPUMhcOj9ZAMW353O5kgAroNU2pxKUpxw2rgQ2K/aS3jsf3D1+/qrhp0fE2MCkIzwJ6lYy6Fu/JcCf8QqMB8Baf6JAT8i2lp9EyY441hSfuWbukXVTA0VATsIDu8fPO6Xx9HBGAi1NJ2XkAMUKtcKcJ523JeBz5ycLbMo5VUDbtZYgua/BF72fpvjMNc0VAn/FCcjaBQ9Mjr9jtJF7An1zhQZiO0VvM6VQSEdbDwCPACpDKenVpMbybW9Hb2ejXfHpFQX/hBCQk7Bpcvzs4aEdo3G0hr4fHEGoiNdNRdOUhEuElSOdTEvABt2EkNFx+v3CCwCmUfZ4plYaixP2Yx2n3r9r9QWnNHasrsdrQ68KwX8hAOiEjJ421H18oGkFRQSXg6+O7gUZr4BJcYUbejnYKiUBuq3ZNHnPGY36Q42IrWYBL2B+4Szt/2vZAfJDTd6kigZeSw7J82R6v2X2kXVbT+Tzl+IHm4Q3rDlnuP6Q6KbeVjfxAPWbis7NojcdqSfWubJ8riUIrAfo1ENiPAS2i2NbRE9n+kQ/e6l+suw0QcSZwhtEL8kQQUvbTb0aLec0lk9Gv2AHYwkhQvzbLpYthx6+drosz1zKH+3LPGIsrq2PDu79/vA5F8p3igMEAFjZQV+CrCxNicM/FdY/UQaL7wsCdBsZGcH6GefB2EVXwsi574Pa294FSeM0aNVPpeMyFYQRhtrzEC0dgnTmFWi9sQcW9k5NLr2x5+PQm7ra/klF/F8WcngG4r1/hLG5F+HsQ+fA6OgoDDUaEMdx9rtv0vLTFNqdDrSWl2FhYQEOHNgPyeysCLOzzAsfpWsRDH5jZfb2gSQA3fQFKzMJJ4sHlJaEeFA9QHlB6G1YLFNMGGAPQPqMrKyeEMNgN5pg5V08AU6kNwwsASQQUy/nXdzkhElSPODgs4DMhkioPKA3RJgfvwoFCN6FkIqA4xwDQgTo90eSKgj3oHGe0q5oHBgVZwR0ql5Qz2MAhjwg+2O7Goj1VvtpD6jugb8AJUrMDagEcToSjgn48yCrzkvTBjYZx7khoK6e82AZgu5JQ4Bqi+oZ/10m3e8bAgSQE8dOQE7CJMh3e5fL+oylJiCKom3HSoBYDqdp+pjYLfX/Ta/UBBw5cmSSMbb1WAhgLNqYJMnLZZfL0seAZrN5rwDzfyGhWa/XNx48+OaT/RCv+iIIT0/vvbfRaFwjvOFoMSErLdxeq9Uu27v3le390mFg0IftpptvXetURXDefPyxn01B1apWtapVrWpVq1rVqla1t9L+I8AAvydNUrElRtkAAAAASUVORK5CYII= Url Encoded Base64 String 

    Read the article

  • Problem using easy_install on Windows 7, 64 bit. (cannot find python.exe)

    - by Rune
    Hi, I have just now installed Python 2.6 on my Windows 7 (64 bit) Lenovo t61p laptop. I have downloaded Sphinx and nose and apparently installed them correctly using python setup.py install (at least no errors were reported during the installation). Now I am trying to install pymongo using easy_install but I am not having much success. It seems that easy_install isn't working at all. I execute easy_install as administrator: C:\>easy_install Cannot find Python executable C:\Program Files\Python26\python.exe The path C:\Program Files\Python26\python.exe is correct. I have found this bug report on bugs.python.org which seems to be related, although its status is 'Resolved'. Do you have any ideas as to what may be wrong? Any pointers, hints or tips for diagnosing the problem further would be greatly appreciated. EDIT: This is the stacktrace I receive when trying to install pymongo: C:\Users\Rune Ibsen\Documents\Downloads\pymongo-1.4>python setup.py install running install running bdist_egg running egg_info writing pymongo.egg-info\PKG-INFO writing top-level names to pymongo.egg-info\top_level.txt writing dependency_links to pymongo.egg-info\dependency_links.txt reading manifest file 'pymongo.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'pymongo.egg-info\SOURCES.txt' installing library code to build\bdist.win-amd64\egg running install_lib running build_py running build_ext building 'pymongo._cbson' extension Traceback (most recent call last): File "setup.py", line 166, in <module> "doc": doc}) File "C:\Program Files\Python26\lib\distutils\core.py", line 152, in setup dist.run_commands() File "C:\Program Files\Python26\lib\distutils\dist.py", line 975, in run_commands self.run_command(cmd) File "C:\Program Files\Python26\lib\distutils\dist.py", line 995, in run_command cmd_obj.run() File "C:\Program Files\Python26\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\install.py", line 76, in run File "C:\Program Files\Python26\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\install.py", line 96, in do_egg_install File "C:\Program Files\Python26\lib\distutils\cmd.py", line 333, in run_command self.distribution.run_command(command) File "C:\Program Files\Python26\lib\distutils\dist.py", line 995, in run_command cmd_obj.run() File "C:\Program Files\Python26\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\bdist_egg.py", line 174, in run File "C:\Program Files\Python26\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\bdist_egg.py", line 161, in call_command File "C:\Program Files\Python26\lib\distutils\cmd.py", line 333, in run_command self.distribution.run_command(command) File "C:\Program Files\Python26\lib\distutils\dist.py", line 995, in run_command cmd_obj.run() File "C:\Program Files\Python26\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\install_lib.py", line 20, in run File "C:\Program Files\Python26\lib\distutils\command\install_lib.py", line 113, in build self.run_command('build_ext') File "C:\Program Files\Python26\lib\distutils\cmd.py", line 333, in run_command self.distribution.run_command(command) File "C:\Program Files\Python26\lib\distutils\dist.py", line 995, in run_command cmd_obj.run() File "setup.py", line 107, in run build_ext.run(self) File "C:\Program Files\Python26\lib\distutils\command\build_ext.py", line 340, in run self.build_extensions() File "C:\Program Files\Python26\lib\distutils\command\build_ext.py", line 449, in build_extensions 
self.build_extension(ext) File "setup.py", line 117, in build_extension build_ext.build_extension(self, ext) File "C:\Program Files\Python26\lib\distutils\command\build_ext.py", line 499, in build_extension depends=ext.depends) File "C:\Program Files\Python26\lib\distutils\msvc9compiler.py", line 448, in compile self.initialize() File "C:\Program Files\Python26\lib\distutils\msvc9compiler.py", line 358, in initialize vc_env = query_vcvarsall(VERSION, plat_spec) File "C:\Program Files\Python26\lib\distutils\msvc9compiler.py", line 274, in query_vcvarsall raise ValueError(str(list(result.keys()))) ValueError: [u'path'] C:\Users\Rune Ibsen\Documents\Downloads\pymongo-1.4> PS.: I previously installed Python 3.1 but later installed 2.6 because I am not sure whether pymongo supports 3.1. PPS.: I have tried installing pymongo using the python setup.py install approach, but this resulted in a nasty-looking stack trace, so I thought I would try to let easy_install take care of it for me. PPPS.: I am completely new to Python, easy_install, eggs etc.
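
    A quick way to narrow this down is to ask the interpreter itself where it lives and what it builds against. The snippet below is only a diagnostic sketch using the standard library (run it with the same python.exe that setup.py uses); it is not a fix for the msvc9compiler error itself.

        # Diagnostic sketch: confirm which interpreter and which distutils paths
        # are actually in play before blaming easy_install. Standard library only.
        import sys
        import distutils.sysconfig as sysconfig

        print(sys.executable)              # the python.exe really being run
        print(sys.version)                 # confirms 2.6 and whether it is a 64-bit build
        print(sysconfig.get_python_lib())  # the site-packages directory distutils targets

    If sys.executable does not point at C:\Program Files\Python26\python.exe, then the PATH or registry is steering things at a different install; if it does, the later ValueError from query_vcvarsall usually indicates a missing 64-bit Visual C++ build environment rather than a problem with easy_install itself.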

    Read the article

  • Lightweight use of Enterprise Library TraceListeners

    - by gWiz
    Is it possible to use the Enterprise Library 4.1 TraceListeners without using the entire Enterprise Library Logging AB? I'd prefer to simply use .NET Diagnostics Tracing, but would like to setup a listener that sends emails on Error events. I figured I could use the Enterprise Library EmailTraceListener. However, my initial attempts to configure it have failed. Here's what I hoped would work: <system.diagnostics> <trace autoflush="false" /> <sources> <source name="SampleSource" switchValue="Verbose" > <listeners> <add name="textFileListener" /> <add name="emailListener" /> </listeners> </source> </sources> <sharedListeners> <add name="textFileListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="..\trace.log" traceOutputOptions="DateTime"> <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" /> </add> <add name="emailListener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging" toAddress="[email protected]" fromAddress="[email protected]" smtpServer="mail.example.com" > <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" /> </add> </sharedListeners> </system.diagnostics> However I get [ArgumentException: The parameter 'address' cannot be an empty string. Parameter name: address] System.Net.Mail.MailAddress..ctor(String address, String displayName, Encoding displayNameEncoding) +1098157 System.Net.Mail.MailAddress..ctor(String address) +8 Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.EmailMessage.CreateMailMessage() +256 Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.EmailMessage.Send() +39 Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener.Write(String message) +96 System.Diagnostics.TraceListener.WriteHeader(String source, TraceEventType eventType, Int32 id) +184 System.Diagnostics.TraceListener.TraceEvent(TraceEventCache eventCache, String source, TraceEventType eventType, Int32 id, String format, Object[] args) +63 System.Diagnostics.TraceSource.TraceEvent(TraceEventType eventType, Int32 id, String format, Object[] args) +198 System.Diagnostics.TraceSource.TraceInformation(String message) +14 Which leads me to believe the .NET Tracing code does not care about the "non-standard" config attributes I've supplied for emailListener. I also tried adding the appropriate LAB configSection declaration and: <loggingConfiguration> <listeners> <add toAddress="[email protected]" fromAddress="[email protected]" smtpServer="mail.example.com" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging" name="emailListener" /> </listeners> </loggingConfiguration> This also results in the same exception. I figure it's possible to programmatically configure the EmailTraceListener, but I prefer this to be config-driven. I also understand I can implement my own derivative of TraceListener. So, is it possible to use the Ent Lib TraceListeners, without using the whole Ent Lib LAB, and configure them from the config file? Update: After examining the code, I have discovered it is not possible. The Ent Lib TraceListeners do not actually utilize the config attributes they specify in overriding TraceListener.GetSupportedAttributes(), despite the recommendations in the .NET TraceListener documentation. Bug filed.

    Read the article

  • how to use ggplot conditional on data

    - by Andreas
    I asked this question and it seams ggplot2 currently has a bug with empty data.frames. Therefore I am trying to check if the dataframe is empty, before I make the plot. But what ever I come up with, it gets really ugly, and doesn't work. So I am asking for your help. example data: SOdata <- structure(list(id = 10:55, one = c(7L, 8L, 7L, NA, 7L, 8L, 5L, 7L, 7L, 8L, NA, 10L, 8L, NA, NA, NA, NA, 6L, 5L, 6L, 8L, 4L, 7L, 6L, 9L, 7L, 5L, 6L, 7L, 6L, 5L, 8L, 8L, 7L, 7L, 6L, 6L, 8L, 6L, 8L, 8L, 7L, 7L, 5L, 5L, 8L), two = c(7L, NA, 8L, NA, 10L, 10L, 8L, 9L, 4L, 10L, NA, 10L, 9L, NA, NA, NA, NA, 7L, 8L, 9L, 10L, 9L, 8L, 8L, 8L, 8L, 8L, 9L, 10L, 8L, 8L, 8L, 10L, 9L, 10L, 8L, 9L, 10L, 8L, 8L, 7L, 10L, 8L, 9L, 7L, 9L), three = c(7L, 10L, 7L, NA, 10L, 10L, NA, 10L, NA, NA, NA, NA, 10L, NA, NA, 4L, NA, 7L, 7L, 4L, 10L, 10L, 7L, 4L, 7L, NA, 10L, 4L, 7L, 7L, 7L, 10L, 10L, 7L, 10L, 4L, 10L, 10L, 10L, 4L, 10L, 10L, 10L, 10L, 7L, 10L), four = c(7L, 10L, 4L, NA, 10L, 7L, NA, 7L, NA, NA, NA, NA, 10L, NA, NA, 4L, NA, 10L, 10L, 7L, 10L, 10L, 7L, 7L, 7L, NA, 10L, 7L, 4L, 10L, 4L, 7L, 10L, 2L, 10L, 4L, 12L, 4L, 7L, 10L, 10L, 12L, 12L, 4L, 7L, 10L), five = c(7L, NA, 6L, NA, 8L, 8L, 7L, NA, 9L, NA, NA, NA, 9L, NA, NA, NA, NA, 7L, 8L, NA, NA, 7L, 7L, 4L, NA, NA, NA, NA, 5L, 6L, 5L, 7L, 7L, 6L, 9L, NA, 10L, 7L, 8L, 5L, 7L, 10L, 7L, 4L, 5L, 10L), six = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), .Label = c("2010-05-25", "2010-05-27", "2010-06-07"), class = "factor"), seven = c(0.777777777777778, 0.833333333333333, 0.333333333333333, 0.888888888888889, 0.5, 0.888888888888889, 0.777777777777778, 0.722222222222222, 0.277777777777778, 0.611111111111111, 0.722222222222222, 1, 0.888888888888889, 0.722222222222222, 0.555555555555556, NA, 0, 0.666666666666667, 0.666666666666667, 0.833333333333333, 0.833333333333333, 0.833333333333333, 0.833333333333333, 0.722222222222222, 0.833333333333333, 0.888888888888889, 0.666666666666667, 1, 0.777777777777778, 0.722222222222222, 0.5, 0.833333333333333, 0.722222222222222, 0.388888888888889, 0.722222222222222, 1, 0.611111111111111, 0.777777777777778, 0.722222222222222, 0.944444444444444, 0.555555555555556, 0.666666666666667, 0.722222222222222, 0.444444444444444, 0.333333333333333, 0.777777777777778), eight = c(0.666666666666667, 0.333333333333333, 0.833333333333333, 0.666666666666667, 1, 1, 0.833333333333333, 0.166666666666667, 0.833333333333333, 0.833333333333333, 1, 1, 0.666666666666667, 0.666666666666667, 0.333333333333333, 0.5, 0, 0.666666666666667, 0.5, 1, 0.666666666666667, 0.5, 0.666666666666667, 0.666666666666667, 0.666666666666667, 0.333333333333333, 0.333333333333333, 1, 0.666666666666667, 0.833333333333333, 0.666666666666667, 0.666666666666667, 0.5, 0, 0.833333333333333, 1, 0.666666666666667, 0.5, 0.666666666666667, 0.666666666666667, 0.5, 1, 0.833333333333333, 0.666666666666667, 0.833333333333333, 0.666666666666667), nine = c(0.307692307692308, NA, 0.461538461538462, 0.538461538461538, 1, 0.769230769230769, 0.538461538461538, 0.692307692307692, 0, 0.153846153846154, 0.769230769230769, NA, 0.461538461538462, NA, NA, NA, NA, 0, 0.615384615384615, 0.615384615384615, 0.769230769230769, 0.384615384615385, 0.846153846153846, 0.923076923076923, 0.615384615384615, 0.692307692307692, 0.0769230769230769, 0.846153846153846, 0.384615384615385, 0.384615384615385, 0.461538461538462, 0.384615384615385, 0.461538461538462, NA, 
0.923076923076923, 0.692307692307692, 0.615384615384615, 0.615384615384615, 0.769230769230769, 0.0769230769230769, 0.230769230769231, 0.692307692307692, 0.769230769230769, 0.230769230769231, 0.769230769230769, 0.615384615384615), ten = c(0.875, 0.625, 0.375, 0.75, 0.75, 0.75, 0.625, 0.875, 1, 0.125, 1, NA, 0.625, 0.75, 0.75, 0.375, NA, 0.625, 0.5, 0.75, 0.875, 0.625, 0.875, 0.75, 0.625, 0.875, 0.5, 0.75, 0, 0.5, 0.875, 1, 0.75, 0.125, 0.5, 0.5, 0.5, 0.625, 0.375, 0.625, 0.625, 0.75, 0.875, 0.375, 0, 0.875), elleven = c(1, 0.8, 0.7, 0.9, 0, 1, 0.9, 0.5, 0, 0.8, 0.8, NA, 0.8, NA, NA, 0.8, NA, 0.4, 0.8, 0.5, 1, 0.4, 0.5, 0.9, 0.8, 1, 0.8, 0.5, 0.3, 0.9, 0.2, 1, 0.8, 0.1, 1, 0.8, 0.5, 0.2, 0.7, 0.8, 1, 0.9, 0.6, 0.8, 0.2, 1), twelve = c(0.666666666666667, NA, 0.133333333333333, 1, 1, 0.8, 0.4, 0.733333333333333, NA, 0.933333333333333, NA, NA, 0.6, 0.533333333333333, NA, 0.533333333333333, NA, 0, 0.6, 0.533333333333333, 0.733333333333333, 0.6, 0.733333333333333, 0.666666666666667, 0.533333333333333, 0.733333333333333, 0.466666666666667, 0.733333333333333, 1, 0.733333333333333, 0.666666666666667, 0.533333333333333, NA, 0.533333333333333, 0.6, 0.866666666666667, 0.466666666666667, 0.533333333333333, 0.333333333333333, 0.6, 0.6, 0.866666666666667, 0.666666666666667, 0.6, 0.6, 0.533333333333333)), .Names = c("id", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "elleven", "twelve"), class = "data.frame", row.names = c(NA, -46L)) And the plot iqr <- function(x, ...) { qs <- quantile(as.numeric(x), c(0.25, 0.5, 0.75), na.rm = T) names(qs) <- c("ymin", "y", "ymax") qs } magic <- function(y, ...) { high <- median(SOdata[[y]], na.rm=T)+1.5*sd(SOdata[[y]],na.rm=T) low <- median(SOdata[[y]], na.rm=T)-1.5*sd(SOdata[[y]],na.rm=T) ggplot(SOdata, aes_string(x="six", y=y))+ stat_summary(fun.data="iqr", geom="crossbar", fill="grey", alpha=0.3)+ geom_point(data = SOdata[SOdata[[y]] > high,], position=position_jitter(w=0.1, h=0),col="green", alpha=0.5)+ geom_point(data = SOdata[SOdata[[y]] < low,], position=position_jitter(w=0.1, h=0),col="red", alpha=0.5)+ stat_summary(fun.y=median, geom="point",shape=18 ,size=4, col="orange") } for (i in names(SOdata)[-c(1,7)]) { p<- magic(i) ggsave(paste("magig_plot_",i,".png",sep=""), plot=p, height=3.5, width=5.5) } The problem is that sometimes in the call to geom_point the subset returns an empty dataframe, which sometimes (!) causes ggplot2 to plot all the data instead of none of the data. geom_point(data = SOdata[SOdata[[y]] > high,], position=position_jitter(w=0.1, h=0),col="green", alpha=0.5)+ This is kindda of important to me, and I am really stuck trying to find a solution. Any help that will get me started is much appreciated. Thanks in advance.

    Read the article

  • What can I do to improve a project if there is a no-listening situation. Developers vs Management

    - by NazGul
    Hi all, I hope I'm not the only one and that I can get an answer from someone with more experience than me, so I can think more clearly and not get depressed with this developer's life. I've been working as a developer for a small company for three years now. In those three years I've been working on the same project, and sincerely, I think this project could be used as a CASE STUDY, because it has all the situations that should never happen in a project and that make a project fail.

    To begin with, and I believe you've already noticed, the project is 3 years old (development only) and is still unfinished, because in every meeting there is a "new priority", a "new problem" to be solved, or a "new feature" to be added. So the first problem is that no target is set. How can you know when something is finished if you don't know what you want? I understand Management, because they see an opportunity and try to get it, but I don't understand how they cannot see (or hear us saying) that they'll lose everything they already have, as well as what they would eventually get.

    Second, there is no team. My team consists of three people: a Senior Developer, a DBA, and finally me for all the work (support, testing, new features, bug fixing, meetings, project management of clients, etc.), aka the Junior Developer. The first (the senior developer) does not perform any tests on his changes, so most of the time his changes give us problems (us = me, since I'm the one who will fix them). The second (the DBA) is an uncompromising person and you cannot talk to him; believe me, I tried! In his view, everything he does is fantastic, even if it is done in the most complicated way possible. And he does whatever he wants, even if we only need it five months from now and an extra hand would help with the things we have to do right now. As you can see, it is very hard to work with no help...

    Third, there is no testing. With every release of the project (I repeat, every release) the customers want to kill us, because there are a lot of bugs. Management? They say they want tests before the release. Us? We say the same. Time? No time. Management? There is always some time to open the application and click a few buttons. Us? We try to explain that it is not so simple. Management doesn't care... end of story. Actually, most of the bugs could be avoided with rigorous work... Some people just want to put on a show for Management: "Did you ask for this? Cool, it's done. Bugs? The do-all-the-work guy will solve them." Unfortunately for me, sometimes the do-all-the-work guy also has to finish it. And to make this all better, I'm the person who listens to the complaints from the customers. Cool, huh? I know everyone makes mistakes. But there are mistakes and mistakes...

    To top it off, in Management's view "the problem is the lack of individual project management", because we cannot do all the stuff they ask, even though there is no PM for the project itself. And they ask us to work overtime without any reward... I do say all of this to management and the other members, but by saying it I become the bad guy, the guy who complains when everything is going well... but we need to work overtime... sigh.

    What can I do to make this work? Has anyone been in a situation like this, and what did you do? I hope you can understand my problem, my English is a little rusty. Thanks.

    Read the article

  • Can't update scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my gentoo system I tried updated the package but ended up with this error. I can't figure out where the problem may be: Calculating dependencies ...... done! >>> Verifying ebuild manifests >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Emerging (1 of 1) dev-lang/scala-2.9.2 >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Failed to emerge dev-lang/scala-2.9.2, Log file: >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log' >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 running, 1 failed Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 failed Load avg: 0.23, 0.16, 0.20 * Package: dev-lang/scala-2.9.2 * Repository: gentoo * Maintainer: [email protected] * USE: amd64 elibc_glibc kernel_linux multilib userland_GNU * FEATURES: sandbox [01m[31;06m!!! ERROR: Couldn't find suitable VM. Possible invalid dependency string. Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun[0m * Unable to determine VM for building from dependencies: NV_DEPEND: virtual/jdk:1.6 java-virtuals/jdk-with-com-sun !binary? ( dev-java/ant-contrib:0 ) app-arch/xz-utils >=dev-java/java-config-2.1.9-r1 source? ( app-arch/zip ) >=dev-java/ant-core-1.7.0 dev-java/ant-nodeps >=dev-java/javatoolkit-0.3.0-r2 >=dev-lang/python-2.4 * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. !!! When you file a bug report, please include the following information: GENTOO_VM= CLASSPATH="" JAVA_HOME="" JAVACFLAGS="" COMPILER="" and of course, the output of emerge --info * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' * Messages for package dev-lang/scala-2.9.2: * Unable to determine VM for building from dependencies: * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. 
* Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' The following eix output may help: % eix java-virtuals/jdk-with-com-sun [I] java-virtuals/jdk-with-com-sun Available versions: 20111111 {{ELIBC="FreeBSD"}} Installed versions: 20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD") Homepage: http://www.gentoo.org Description: Virtual ebuilds that require internal com.sun classes from a JDK Both virtual jdks 1.6 and 1.7 are installed: % eix virtual/jdk [I] virtual/jdk Available versions: (1.4) ~1.4.2-r1[1] (1.5) 1.5.0 ~1.5.0-r3[1] (1.6) 1.6.0 1.6.0-r1 (1.7) (~)1.7.0 Installed versions: 1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12) Description: Virtual for JDK [1] "java-overlay" /var/lib/layman/java-overlay

    Read the article

  • Web browsing is fast, but downloads are slow

    - by Ricket
    I work for a company on my university's campus, helping with general IT problems and some web development. But lately there has been a problem that has me and my boss completely stumped. We, plus one contractor, make up the entire IT department, so I'm reaching out to you for help. All around the office, we have wall jacks. These collect in a closet down the hall and all plug into a switch. This switch, along with our individual server jacks, plugs into another switch, and that switch plugs into our firewall hardware. Then the firewall is connected out to our campus network. Our campus internet is, well, very fast. I don't know exactly the terms, tiers, etc., but we have thousands of students and downloads can run as fast as 10 MB/s at night; uploads are sometimes even faster. I think we're practically ISP level. In short, I have a lot of faith that it is not the campus side of things that is causing a problem, combined with other evidence I'll mention in a moment. So our symptoms: web browsing is fast. Web pages, images, etc. load instantly. No problems there. But then when I go to download something, the download starts fast but very quickly (a matter of seconds) drops to nearly 0. Often it will actually drop to 0 and time out. This happens with even very small files, 1 MB or less. It smells to me like a QoS sort of thing. I'm not entirely sure, and I wanted to get your opinions first. My boss is hesitant to touch our firewall, much less let me touch it, and it was set up and is managed by a consultant remotely. These problems don't seem tied to a time of the day. I've tried downloads after 5:00 and still the same thing happens. From my desk, I can turn on my wireless adapter and pick up the campus wireless access point. If I unplug ethernet and connect to it, downloads are fast. This adds to my suspicion that it's limited to our company network. Also, a number of weeks ago the consultant upgraded our firewall firmware. Suddenly everything was very fast. I tested with downloads from Sun and speedtest.net and things were blazing fast, as they should be with our campus internet! It was wonderful, and I figured the slow speeds were an old firmware bug. In a matter of days, things steadily declined until they were back to the old symptoms. Oh, and we have antivirus installed on every computer, and we keep it up to date. Though I suppose the possibility is still there that someone could have spyware which is bogging down our internet, in which case what is the easiest/best way to find this out? (maybe this should go in a separate question) Thank you for your patience in reading all of this. Do you have any ideas as to what I can try? Is this something that you've experienced before? What sort of tools or methods can I use to try and diagnose the problem? P.S. everything here is Windows. Windows Server 2003 and 2008 on our servers, and Windows XP on employees' machines. Update: We are submitting a ticket to the university to just take a look and see if they see anything unusual and/or can suggestion methods for us to try and pinpoint our problem. Hopefully they'll be helpful! I'll update this to let you know what goes on. Update again: We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two ethernet cables plugged into it, nothing else. After removing the hub, our speeds have jumped up to several mbps. However in talking with the campus, we got them to run a gigabit line to our firewall in place of the 100mbps line. 
As of Friday, we are at about 65 Mbps up and down (according to speedtest.net at 8 am)!! Go NC State!!

    Read the article

  • Parsing xml file that comes in as one object per line

    - by Casey
    I haven't been here in so long, I forgot my prior account! Anyway, I am working on parsing an XML document that comes in ugly. It is for banking statements. Each line is a <statement>all tags</statement>. What I need to do is read this file in and parse the XML at the same time, while also formatting it to be more human readable. Point being, the original input looks like this:

        <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement>
        <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement>
        <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement>

    I need the final output to be as follows:

        <statement>
            <name></name>
            <address></address>
        </statement>

    This is fine and dandy. I am using the following, which is very slow: considering 5.1 million lines, a 254k data file, and about 60k statements, it takes around 8 minutes.

        foreach (String item in lines)
        {
            XElement xElement = XElement.Parse(item);
            sr.WriteLine(xElement.ToString().Trim());
        }

    Then, when the file is formatted, this is the part that sucks. I need to check every single tag in the transaction elements, and if a tag that could be there is missing, I have to fill it in. Our designer software will default in prior values if a tag is possible but the current object does not have it: it defaults in the value of a prior one that was not null. "I know, and they swear up and down it is not a bug... ok?" So that also takes about 5 to 10 minutes. I need to break all this down and find a faster method for working with the initial XML. This is a preprocessing step and cannot take that long if not necessary. It just seems redundant. Is there a better way to parse the XML, or is this the best I can do? Right now I parse the XML, write to a temp file, and then read that file back in to the output file while inserting the missing tags. Two I/O runs for one process. Yuck.
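
    For what it's worth, the usual answer to the two-pass problem is to fill in the missing tags during the same pass that parses each line, so the file is read once and written once. The sketch below shows that shape in Python purely as an illustration of the one-pass idea (the question's code is C#); the EXPECTED tag list and file names are made up.

        # One-pass sketch: parse each <statement> line, add any expected-but-missing
        # tags as explicit empty elements, and write the result immediately.
        import xml.etree.ElementTree as ET

        EXPECTED = ["name", "address"]  # hypothetical list of tags every statement must carry

        with open("statements_in.txt") as src, open("statements_out.xml", "w") as dst:
            for line in src:
                line = line.strip()
                if not line:
                    continue
                stmt = ET.fromstring(line)         # one parse per line, as in the original loop
                for tag in EXPECTED:
                    if stmt.find(tag) is None:
                        ET.SubElement(stmt, tag)   # fill the gap so nothing defaults in later
                dst.write(ET.tostring(stmt, encoding="unicode") + "\n")

    In the C# version the same idea would mean checking for and adding the missing elements right after XElement.Parse, before the WriteLine, instead of re-reading a temp file.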

    Read the article

  • Sometimes big delay when using PHP mail()

    - by Robbert Dam
    I have a website which processes orders from a Windows application. This works as follows: User clicks "Order now" in the windows app App uploads a file with POST to a PHP script The script immediately calls the PHP mail() function (order is not stored in a db) This works fine most of the time. However, sometimes a big delay occurs (several days). Customers calls why the product has not yet been delivered. E-mail headers of delayed mail follows: Microsoft Mail Internet Headers Version 2.0 Received: from barracuda.nkl.nl ([10.0.0.1]) by smtp.nkl.nl with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 May 2010 16:26:51 +0200 X-ASG-Debug-ID: 1274883818-2f8800000000-X58hIK X-Barracuda-URL: http://10.0.0.1:8000/cgi-bin/mark.cgi Received: from server45.firstfind.nl (localhost [127.0.0.1]) by barracuda.nkl.nl (Spam & Virus Firewall) with ESMTP id ECAFD15776A for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST) Received: from server45.firstfind.nl (server45.firstfind.nl [93.94.226.76]) by barracuda.nkl.nl with ESMTP id 85bAT2AU58kkxjPb for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST) X-Barracuda-Envelope-From: [email protected] Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200 Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200 Date: Wed, 19 May 2010 11:47:10 +0200 Message-Id: <[email protected]> To: [email protected] X-ASG-Orig-Subj: easyfit - ref: Hoen3443 Subject: easyfit - ref: Hoen3443 X-PHP-Script: www.nklseminar.nl/emailer/upload.php for 77.61.220.217 From: [email protected] Content-type: text/html X-Virus-Scanned: by amavisd-new X-Barracuda-Connect: server45.firstfind.nl[93.94.226.76] X-Barracuda-Start-Time: 1274883820 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by Barracuda Spam & Virus Firewall at nkl.nl X-Barracuda-Spam-Score: 0.92 X-Barracuda-Spam-Status: No, SCORE=0.92 using global scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=DATE_IN_PAST_96_XX, DATE_IN_PAST_96_XX_2, HTML_MESSAGE, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, NO_REAL_NAME X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.2.30817 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.00 NO_REAL_NAME From: does not include a real name 0.01 DATE_IN_PAST_96_XX Date: is 96 hours or more before Received: date 0.00 MIME_HTML_ONLY BODY: Message only has text/html MIME parts 0.00 HTML_MESSAGE BODY: HTML included in message 0.86 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers 2.07 DATE_IN_PAST_96_XX_2 DATE_IN_PAST_96_XX_2 Return-Path: [email protected] X-OriginalArrivalTime: 26 May 2010 14:26:51.0343 (UTC) FILETIME=[7B80DDF0:01CAFCDF] The delay seems to occur here: Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200 Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200 I've reported this issue various times to the web hosting service that hosts my website. They say the delay does not occur in their network (impossible). 
But they do confirm that the e-mail was first seen on their mail server on May 26, which is 7 days after the mail was composed. The order is marked with the timestamp of the user's local PC, which also matches May 19 (so it's not a PC clock problem). It's also interesting that all the delayed mails (for orders placed on different days) come in at once, so I suddenly receive 14 e-mails in my mailbox from various days. Any idea where this delay may be introduced? Could there be a bug in my PHP code that causes this? (I cannot believe I could introduce a 7-day delay loop in my PHP code.)
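
    Reading the quoted headers, the message appears to sit in the local submission queue on server45 between May 19 and May 26, i.e. before the relay hop ever accepts it. The common workaround for that failure mode is to hand the message straight to an authenticated SMTP relay so a refusal or timeout surfaces immediately, rather than relying on mail()'s fire-and-forget hand-off to the local queue. A rough sketch of that idea in Python (not the asker's PHP; host, port, addresses and credentials are placeholders):

        # Rough sketch: deliver via an authenticated SMTP relay so errors surface
        # immediately instead of the message idling in a local queue.
        import smtplib
        from email.mime.text import MIMEText

        msg = MIMEText("<p>order details here</p>", "html")
        msg["Subject"] = "easyfit order"
        msg["From"] = "orders@example.com"
        msg["To"] = "sales@example.com"

        with smtplib.SMTP("smtp.example.com", 587, timeout=10) as smtp:
            smtp.starttls()
            smtp.login("smtp-user", "smtp-password")
            smtp.send_message(msg)   # raises right away if the relay refuses the message

    In PHP the equivalent would be an SMTP mailer library rather than mail(), but that only sidesteps the queue; it does not explain why sendmail on server45 held the messages for a week.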

    Read the article

  • WP7: play MP3 using Media with phonegap/Cordova

    - by Loda
    My problem: I use the Media Class from Cordova. The MP3 file is only played once (the first time). Code: Add this code to the Cordova Starter project to reproduce my problem: var playCounter = 0; function playMP3(){ console.log("playMP3() counter " + playCounter); var my_media = new Media("app/www/test.mp3");//ressource buildAction == content my_media.play(); playCounter++; } [...] <p onclick="playMP3();">Click to Play MP3</p> VS output: [...] GapBrowser_Navigated :: /app/www/index.html 'UI Task' (Managed): Loaded 'System.ServiceModel.Web.dll' 'UI Task' (Managed): Loaded 'System.ServiceModel.dll' Log:"onDeviceReady. You should see this message in Visual Studio's output window." 'UI Task' (Managed): Loaded 'Microsoft.Xna.Framework.dll' Log:"playMP3() counter 0" 'UI Task' (Managed): Loaded 'System.SR.dll' Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 1}" A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 2}" Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 2, \"value\": 2.141}" Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 4}" Log:"playMP3() counter 1" A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll A first chance exception of type 'System.IO.IOException' occurred in mscorlib.dll A first chance exception of type 'System.IO.IsolatedStorage.IsolatedStorageException' occurred in mscorlib.dll Log:"media on status :: {\"id\": \"2de3388c-bbb6-d896-9e27-660f1402bc2a\", \"msg\": 9, \"value\": 5}" My Config: cordova-1.6.1.js Lumia 800 WP 7.5 (7.10.7740.16) WorkAround (kind of): Desactivate the app (turn off the screen) reactivate the app (turn on the screen) - you get one more shot. Any help is welcome as I am blocked on this since may days and I found no usefull information anywhere. Also, Can you tell me if this code work on your config ? . . . Update: add a demo code using a global var. Keeping the instance alive. result The test2.mp3 is played and can replay fine. the test.mp3 is not played at all. It is the first file you play that will work. Code function onDeviceReady() { document.getElementById("welcomeMsg").innerHTML += "Cordova is ready! version=" + window.device.cordova; console.log("onDeviceReady. You should see this message in Visual Studio's output window."); my_media = new Media("app/www/test.mp3");//ressource buildAction == content my_media2 = new Media("app/www/test2.mp3");//ressource buildAction == content } var playCounter = 0; var my_media = null; function playMP3(){ console.log("playMP3() counter " + playCounter); my_media.play(); playCounter++; } var my_media2 = null; function playMP32(){ console.log("playMP32() counter " + playCounter); my_media2.play(); playCounter++; } </script> [...] <p onclick="playMP3();">Click to Play MP3</p> <p onclick="playMP32();">Click to Play MP3 2</p> VS output: Log:"onDeviceReady. You should see this message in Visual Studio's output window." 
INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b Log:"playMP32() counter 0" INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b Log:"playMP32() counter 1" Log:"playMP3() counter 2" INFO: startPlayingAudio could not find mediaPlayer for b60fa266-d105-a295-a5be-fa2c6b824bc1 A first chance exception of type 'System.ArgumentException' occurred in System.Windows.dll Error: El parámetro es incorrecto. Log:"playMP32() counter 3" INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b Can anybody reproduce this ? link to bug report: https://issues.apache.org/jira/browse/CB-941

    Read the article

  • log-back and thirdparty writing to stdout. How to stop them getting interleaved.

    - by David Roussel
    First some background. I have a batch-type java process run from a DOS batch script. All the java logging goes to stdout, and the batch script redirects the stdout to a file. (This is good for me because I can ECHO from the script and it gets into the log file, so I can see all the java JVM command line args, which is great for debugging.) I may not I use slf4j API, and for the backend I used to use log4j, but recently switched to logback-classic. Although all my application code uses slf4j, I have a third party library that does it's own logging (not using a standard API) which also gets written to stdout. The problem is that sometimes log lines get mixed up and don't cleanly appear on separate lines. Here is an example of some messed up output: 2010-05-28 18:00:44.783 [thread-1 ] INFO CreditCorrelationElementBuilderImpl - Bump parameters exist for scenario, now attempting bumping. [indexDisplayName=STANDARD_S1_v300] 2010-05-28 18:01:43.517 [thread-1 ] INFO CreditCorrelationElementBuilderImpl - Found adjusted point in data, now applying bump. [point=0.144040000000000] 2010-05-28 18:01:58.642 [thread-1 ] DEBUG com.company.request.Request - Generated request for [dealName=XXX_20050225_01[5],dealType=GENERIC_XXX,correlationType=2,copulaType=1] in 73.8 s, Simon Stopwatch: [sys1.batchpricer.reqgen.gen INHERIT] total 1049 s, counter 24, max 74.1 s, min 212 ms 2010-05-28 18:05/28/10 18:02:20.236 INFO: [ServiceEvent] SubmittedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23 01:58.658 [req-writer-2b ] INFO .c.g.r.o.OptionalFileDocumentOutput - Writing request XML to \\filserver\dir\file1.xml - write time: 21.4 ms - Simon Stopwatch: [sys1.batchpricer.reqgen.writeinputfile INHERIT] total 905 ms, counter 24, max 109 ms, min 10.8 ms 2010-05-28 18:02:33.626 [ResponseCallbacks-1: DriverJobSpace$TakeJobRunner$1] ERROR c.c.s.s.D.CalculatorCallback - Id:23 no deal found !! 2010-0505/28/10 18:02:50.267 INFO: [ServiceEvent] CompletedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23:Total:24 Now comparing back to older log files, it seems the problem didn't occur when using log4j as the logging backend. So logback must be doing something different. The problem seems to be that although PrintStream.write(byte buf[], int off, int len) is synchronized, however I can see in ch.qos.logback.core.joran.spi.ConsoleTarget that System.out.write(int b) is the only write method called. So inbetween logback outputting each byte, the thirdparty library is managing to write a whole string to the stdout. (Not only is this cause me a problem, but it must also be a little inefficient?) Is there any other fix to this interleaving problem than patching the code to ConsoleTarget so it implments the other write methods? Any nice work arounds. Or should I just file a bug report? Here is my logback.xml: <configuration> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%-16thread] %-5level %-35.35logger{30} - %msg%n</pattern> </encoder> </appender> <root level="DEBUG"> <appender-ref ref="STDOUT" /> </root> </configuration> I'm using logback 0.9.20 with java 1.6.0_07.
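
    The diagnosis above is the heart of it: each log line becomes many independent write calls on a shared stream, so anything else writing to stdout can land between them. The following toy (Python, nothing to do with logback itself) just illustrates the difference between emitting a line as many small writes and emitting it as one call.

        # Toy illustration only: two writers share one stream. Emitting a line as
        # many small write calls leaves room for interleaving; a single call per
        # line keeps each line together in practice.
        import sys
        import threading

        def per_char(msg):
            for ch in msg + "\n":
                sys.stdout.write(ch)      # analogous to ConsoleTarget calling write(int b)

        def per_line(msg):
            sys.stdout.write(msg + "\n")  # one call per line

        for writer in (per_char, per_line):
            threads = [threading.Thread(target=writer, args=(mark * 60,)) for mark in "AB"]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            sys.stdout.write("---\n")

    Short of patching ConsoleTarget to implement the bulk write methods, the practical workaround is to keep logback and the third-party library off the same stream, for example by pointing logback at a FileAppender instead of the console.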

    Read the article

  • Cannot get focus on new opened tab with selenium IDE

    - by Goueg83460
    I'm trying to create some web tests with Selenium IDE, but I have a problem: when I click on a JavaScript link, it opens a new tab. I need to perform some checks on this new tab, but I can't get the focus; it stays on the main page. I have tried several things I found on Google without getting it to work. I hope that someone can help me. Thanks in advance.

    Update: I have tried several more things and I think I'm on the right track. I can get the window names with:

        storeAllWindowNames names
        echo names=${names}

    I get something like: , 987dfg4545sdfgsd

    It seems the value before the "," is null (the default page), and the other value is the name of my page. But I'm not able to open it with selectWindow. Does anyone know how I should do it? Thanks in advance.

    More info about my Selenium tests:

        <tr> <td>setSpeed</td> <td>1000</td> <td></td> </tr>
        <tr> <td>selectWindow</td> <td>null</td> <td></td> </tr>
        <tr> <td>click</td> <td>link=Show Tree...</td> <td></td> </tr>
        <tr> <td>storeAllWindowNames</td> <td>names</td> <td>array</td> </tr>
        <tr> <td>echo</td> <td>${names}</td> <td></td> </tr>
        <tr> <td>waitForPopUp</td> <td>${names}</td> <td>30000</td> </tr>
        <tr> <td>selectWindow</td> <td>name=${names}</td> <td></td> </tr>
        <tr> <td>clickAndWait</td> <td>link=Search</td> <td></td> </tr>

    Results:

        * [info] Executing: |setSpeed | 1000 | |
        * [info] Executing: |selectWindow | null | |
        * [info] Executing: |click | link=Show Tree... | |
        * [info] Executing: |storeAllWindowNames | names | array |
        * [info] Executing: |echo | ${names} | |
        * [info] echo: ,bdae1e119a367a54
        * [info] Executing: |waitForPopUp | ${names} | 30000 |
        * [error] Timed out after 30000ms
        * [info] Executing: |selectWindow | name=${names} | |
        * [error] Window does not exist. If this looks like a Selenium bug, make sure to read http://seleniumhq.org/docs/04_selenese_commands.html#alerts-popups-and-multiple-windows for potential workarounds.

    Here bdae1e119a367a54 is the dynamic value that I want to get. I found an approach that someone else had used, but it does not work for me; it returns null: http://old.nabble.com/How-can-I-access-the-second,-third..-element-of-a-stored-array--td9393201.html
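
    If moving off Selenium IDE is an option, this window-handle juggling is much easier to express in WebDriver. A minimal Python sketch of the same flow, with the link texts taken from the table above and everything else assumed:

        # Minimal WebDriver sketch: remember the original handle, trigger the new
        # tab, wait for it to appear, then switch to whichever handle is new.
        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        driver = webdriver.Firefox()
        driver.get("http://example.com/start")   # placeholder URL

        original = driver.current_window_handle
        driver.find_element(By.LINK_TEXT, "Show Tree...").click()

        WebDriverWait(driver, 30).until(EC.number_of_windows_to_be(2))
        new_handle = next(h for h in driver.window_handles if h != original)
        driver.switch_to.window(new_handle)      # focus now follows the new tab
        driver.find_element(By.LINK_TEXT, "Search").click()

    In Selenium IDE itself, the usual trick is to select the popup by its title or by a single stored handle rather than by ${names}, since storeAllWindowNames stores the whole array and ${names} expands to the comma-separated list, which is why the waitForPopUp and selectWindow calls above never match.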

    Read the article

  • ldirectord ipvsadm not show reals ip and not work wtih pacemaker and corosync

    - by miguer27
    first thanks for your time. I'm having a problem with ldirectord that I can not solve, I comment my situation: I have two nodes with pace maker and corosync and configure somes resources: root@ldap1:/home/mamartin# crm status Last updated: Tue Jun 3 12:58:30 2014 Last change: Tue Jun 3 12:23:47 2014 via cibadmin on ldap1 Stack: openais Current DC: ldap2 - partition with quorum Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff 2 Nodes configured, 2 expected votes 7 Resources configured. Online: [ ldap1 ldap2 ] Resource Group: IPV_LVS IPV_4 (ocf::heartbeat:IPaddr2): Started ldap1 IPV_6 (ocf::heartbeat:IPv6addr): Started ldap1 lvs (ocf::heartbeat:ldirectord): Started ldap1 Clone Set: clon_IPV_lo [IPV_lo] Started: [ ldap2 ] Stopped: [ IPV_lo:1 ] root@ldap1:/home/mamartin# crm configure show node ldap2 \ attributes standby="off" node ldap1 \ attributes standby="off" primitive IPV-lo_4 ocf:heartbeat:IPaddr \ params ip="192.168.1.10" cidr_netmask="32" nic="lo" \ op monitor interval="5s" primitive IPV-lo_6 ocf:heartbeat:IPv6addrLO \ params ipv6addr="[fc00:1::3]" cidr_netmask="64" \ op monitor interval="5s" primitive IPV_4 ocf:heartbeat:IPaddr2 \ params ip="192.168.1.10" nic="eth0" cidr_netmask="25" lvs_support="true" \ op monitor interval="5s" primitive IPV_6 ocf:heartbeat:IPv6addr \ params ipv6addr="[fc00:1::3]" nic="eth0" cidr_netmask="64" \ op monitor interval="5s" primitive lvs ocf:heartbeat:ldirectord \ params configfile="/etc/ldirectord.cf" \ op monitor interval="20" timeout="10" \ meta target-role="Started" group IPV_LVS IPV_4 IPV_6 lvs group IPV_lo IPV-lo_6 IPV-lo_4 clone clon_IPV_lo IPV_lo \ meta interleave="true" target-role="Started" location cli-prefer-IPV_LVS IPV_LVS \ rule $id="cli-prefer-rule-IPV_LVS" inf: #uname eq ldap1 colocation LVS_no_IPV_lo -inf: clon_IPV_lo IPV_LVS property $id="cib-bootstrap-options" \ dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" \ last-lrm-refresh="1401264327" rsc_defaults $id="rsc-options" \ resource-stickiness="1000" The problem is in the ipvsadm only show a one real IP, when i configured two now, show the ldirector.cf: root@ldap1:/home/mamartin# ipvsadm IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags - RemoteAddress:Port Forward Weight ActiveConn InActConn TCP ldap-maqueta.cica.es:ldap wrr - ldap2.cica.es:ldap Route 4 0 0 TCP [[fc00:1::3]]:ldap wrr - [[fc00:1::2]]:ldap Route 4 0 0 root@ldap1:/home/mamartin# cat /etc/ldirectord.cf checktimeout=10 checkinterval=2 autoreload=yes logfile="/var/log/ldirectord.log" quiescent=yes #ipv4 virtual=192.168.1.10:389 real=192.168.1.11:389 gate 4 real=192.168.1.12:389 gate 4 scheduler=wrr protocol=tcp checktype=on #ipv6 virtual6=[[fc00:1::3]]:389 real6=[[fc00:1::1]]:389 gate 4 real6=[[fc00:1::2]]:389 gate 4 scheduler=wrr protocol=tcp checkport=389 checktype=on and in the logs I see nothing clear: root@ldap1:/home/mamartin# ldirectord -d /etc/ldirectord.cf start DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0) DEBUG2: Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) DEBUG2: Disabled real server=on:tcp:192.168.1.11:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389) DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 
192.168.1.12:389 -g -w 0) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 0) DEBUG2: Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) DEBUG2: Disabled real server=on:tcp:192.168.1.12:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389) DEBUG2: Checking on: Real servers are added without any checks DEBUG2: Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) Destination already exists root@ldap1:/home/mamartin# cat /var/log/ldirectord.log [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) failed: [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.11:389 (tcp:192.168.1.10:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::2]]:389 (tcp:[[fc00:1::3]]:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t [[fc00:1::3]]:389 -r [[fc00:1::2]]:389 -g -w 4) failed: [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: [[fc00:1::2]]:389 ([[fc00:1::3]]:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::1]]:389 (tcp:[[fc00:1::3]]:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: [[fc00:1::1]]:389 ([[fc00:1::3]]:389) (Weight set to 4) do not know if this is a bug or a configuration error, can anyone help? Regards.

    Read the article

< Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >