Search Results

Search found 10055 results on 403 pages for '11 1 2 1'.


  • Does Apache ever give incorrect "out of threads" errors?

    - by Eli Courtwright
    Lately our Apache web server has been giving us this error multiple times per day: [Tue Apr 06 01:07:10 2010] [error] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting We raised our ThreadsPerChild setting from 50 to 100, but we still get the error. Our access logs indicate that these errors never even happen at periods of high load. For example, here's an excerpt from our access log (ip addresses and some urls are edited for privacy). As you can see, the above error happened at 1:07 and only a small handful of requests occurred in the several minutes leading up to the error: 99.88.77.66 - - [06/Apr/2010:00:59:33 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-icons_222222_256x240.png HTTP/1.1" 304 - 99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111 99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111 99.88.77.66 - mpeu [06/Apr/2010:00:59:40 -0400] "GET /some/dynamic/content HTTP/1.1" 200 145049 55.44.33.22 - mpeu [06/Apr/2010:01:06:56 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/date.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image1.gif HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image2.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image3.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image4.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image5.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image6.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image7.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image8.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image9.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/imageA.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_flat_75_ffffff_40x100.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_highlight-soft_75_cccccc_1x100.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET 
/WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110 11.22.33.44 - mpeu [06/Apr/2010:01:18:03 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311 11.22.33.44 - - [06/Apr/2010:01:18:03 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 - 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 200 27374 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 - 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 200 12795 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/date.js HTTP/1.1" 200 25809 For what it's worth, we're running the version of Apache that ships with Oracle 10g (some 2.0 version), and we're using mod_plsql to generate our dynamic content. Since the Apache server runs as a separate process and the database doesn't record any problems when this error occurs, I'm doubtful that Oracle is the problem. Unfortunately, the errors are freaking out our sysadmins, who are inclined to blame any and all problems which occur with the server on this error. Is this a known bug in Apache that I simply haven't been able to find any reference to through Google?
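
    For reference, the thread limits live in httpd.conf. The directive names below are standard Apache 2.0 MPM settings, but the values are purely illustrative and the file shipped with Oracle HTTP Server may look different; on the Windows (winnt) MPM only ThreadsPerChild (capped by ThreadLimit) applies, while on the worker MPM total concurrency is bounded by MaxClients, which must not exceed ServerLimit times ThreadsPerChild.

        <IfModule worker.c>
            # total concurrency = min(MaxClients, ServerLimit * ThreadsPerChild);
            # ThreadsPerChild may not exceed ThreadLimit
            StartServers          2
            ServerLimit          16
            ThreadLimit         100
            ThreadsPerChild     100
            MaxClients         1600
            MinSpareThreads      25
            MaxSpareThreads      75
            MaxRequestsPerChild   0
        </IfModule>

    Since the error here shows up at low load, it can also be worth checking whether a handful of slow or hung requests (mod_plsql calls, for instance) are holding threads open, rather than assuming raw traffic volume is the cause.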

    Read the article

  • Perl IO modules possibly causing issues in Net::DNS module

    - by Rich
    Hi! I’m porting some software that I wrote for a White Russian OpenWRT system to a new Kamikaze 8.09.1 OpenWRT system but I am having some serious issues that I’m hoping you can help me with. Old system: Linux kernel 2.4.34 MIPSEL arch, Perl 5.8.7, Net::DNS 0.48, IO 1.21, IO::Socket 1.28, IO::Socket::INET 1.28. New system: Linux kernel 2.6.26.8 MIPS arch, Perl 5.10.0, Net::DNS 0.66, IO 1.23_01, IO::Socket 1.30_01, IO::Socket::INET 1.31. First, let me provide some background information… I am trying to resolve my server (clearprobe.winbeam.com) from within my Perl program and see the following if I enable debugging in Net::DNS: resolve: Server 'clearprobe-ddns.winbeam.com' ;; query(clearprobe-ddns.winbeam.com) ;; setting up an AF_INET() family type UDP socket ;; send_udp(192.168.88.1:53) ;; send_udp(4.2.2.2:53) ;; send_udp(192.168.88.1:53) ;; send_udp(4.2.2.2:53) resolve: res->errorstring: query timed out Both of these servers resolve clearprobe.winbeam.com fine from the command line: root@cwb-2-11:~# echo "nameserver 192.168.88.1" > /etc/resolv.conf root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com Server: 192.168.88.1 Address 1: 192.168.88.1 router Name: clearprobe-ddns.winbeam.com Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net root@cwb-2-11:~# echo "nameserver 4.2.2.2" > /etc/resolv.conf root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com Server: 4.2.2.2 Address 1: 4.2.2.2 vnsc-bak.sys.gtei.net Name: clearprobe-ddns.winbeam.com Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net Using Perl’s call to the C gethostbyaddr() function works fine, but I need to do another lookup later in the software which requires that I specify the nameserver (clearprobe-ddns.winbeam.com is the authority for my internal DNS zone), hence my Net::DNS requirement. Now, here is the IO module-specific information: What I am seeing is that the reply is coming back from the nameserver (confirmed via tcpdump – I can send the captures if you’d like), but the UDP packets are sitting in the process’s UDP receive queue pending reception by Net::DNS (the approx 1752 bytes per response stay queued waiting for $sel->can_read()): root@cwb-2-11:~# netstat -una Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State udp 1752 0 0.0.0.0:52680 0.0.0.0:* root@cwb-2-11:~# netstat -una Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State udp 5256 0 0.0.0.0:52680 0.0.0.0:* If I force $sock[AF_INET]->recv($buf, $self->_packetsz) around line 803 of /usr/lib/perl5/5.10/Net/DNS/Resolver/Base.pm, instead of waiting for IO::Select’s can_read() function ( @ready = $sel->can_read($timeout)) to populate @ready, the response is received and processed. Any idea what could be causing this issue? In a possibly related matter, I noticed in another script that the following code fails in the same manner (network responses stay in the process’s TCP receive queue) with the new system: $sock = new IO::Socket::INET( PeerAddr => "$server", PeerPort => 37, Proto => 'tcp', Timeout => 5 ); Whereas the following code works: $sock = new IO::Socket::INET( PeerAddr => "$server", PeerPort => 37, Proto => 'tcp' ); I have looked through the Net::DNS code and don’t see a timeout passed for the UDP sockets, so I am not sure if this is related or not. Please let me know if I can provide you with any further information in order to help diagnose this issue. Thanks! -Rich
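
    For anyone trying to reproduce this outside of Perl: the pattern Net::DNS::Resolver follows for UDP is essentially send, select with a timeout, then recv. The sketch below is purely illustrative Python (the nameserver and hostname are just the ones quoted in the post) and is a quick way to check whether select() ever reports the queued reply as readable on the affected box.

        import select
        import socket
        import struct

        def build_query(name, qid=0x1234):
            # 12-byte header (id, flags with RD set, QDCOUNT=1), then the question section.
            header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
            qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
            return header + qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(build_query("clearprobe-ddns.winbeam.com"), ("192.168.88.1", 53))

        readable, _, _ = select.select([sock], [], [], 5.0)  # 5 second timeout
        if readable:
            data, peer = sock.recvfrom(4096)
            print("got %d bytes from %s" % (len(data), peer[0]))
        else:
            print("select() timed out even though a reply may already be queued")

    If the reply shows up in netstat's Recv-Q but select() never flags the socket, that points at the select/poll layer on the new platform rather than at Net::DNS itself.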

    Read the article

  • How do I get confidence intervals without inverting a singular Hessian matrix?

    - by AmalieNot
    Hello. I recently posted this to reddit and it was suggested I come here, so here I am. I'm a student working on an epidemiology model in R, using maximum likelihood methods. I created my negative log likelihood function. It's sort of gross looking, but here it is: NLLdiff = function(v1, CV1, v2, CV2, st1 = (czI01 - czV01), st2 = (czI02 - czV02), st01 = czI01, st02 = czI02, tt1 = czT01, tt2 = czT02) { prob1 = (1 + v1 * CV1 * tt1)^(-1/CV1) prob2 = ( 1 + v2 * CV2 * tt2)^(-1/CV2) -(sum(dbinom(st1, st01, prob1, log = T)) + sum(dbinom(st2, st02, prob2, log = T))) } The reason the first line looks so awful is because most of the data it takes is inputted there. czI01, for example, is already declared. I did this simply so that my later calls to the function don't all have to have awful vectors in them. I then optimized for CV1, CV2, v1 and v2 using mle2 (library bbmle). That's also a bit gross looking, and looks like: ml.cz.diff = mle2 (NLLdiff, start=list(v1 = vguess, CV1 = cguess, v2 = vguess, CV2 = cguess), method="L-BFGS-B", lower = 0.0001) Now, everything works fine up until here. ml.cz.diff gives me values that I can turn into a plot that reasonably fits my data. I also have several different models, and can get AICc values to compare them. However, when I try to get confidence intervals around v1, CV1, v2 and CV2 I have problems. Basically, I get a negative bound on CV1, which is impossible as it actually represents a square number in the biological model as well as some warnings. The warnings are this: http://i.imgur.com/B3H2l.png . Is there a better way to get confidence intervals? Or, really, a way to get confidence intervals that make sense here? What I see happening is that, by coincidence, my hessian matrix is singular for some values in the optimization space. But, since I'm optimizing over 4 variables and don't have overly extensive programming knowledge, I can't come up with a good method of optimization that doesn't rely on the hessian. I have googled the problem - it suggested that my model's bad, but I'm reconstructing some work done before which suggests that my model's really not awful (the plots I make using the ml.cz.diff look like the plots of the original work). I have also read the relevant parts of the manual as well as Bolker's book Ecological Models in R. I have also tried different optimization methods, which resulted in a longer run time but the same errors. The "SANN" method didn't finish running within an hour, so I didn't wait around to see the result. tl;dr : my confidence intervals are bad, is there a relatively straightforward way to fix them in R. My vectors are: czT01 = c(5, 5, 5, 5, 5, 5, 5, 25, 25, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 50, 50) czT02 = c(5, 5, 5, 5, 5, 10, 10, 10, 10, 10, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 75, 75, 75, 75, 75) czI01 = c(25, 24, 22, 22, 26, 23, 25, 25, 25, 23, 25, 18, 21, 24, 22, 23, 25, 23, 25, 25, 25) czI02 = c(13, 16, 5, 18, 16, 13, 17, 22, 13, 15, 15, 22, 12, 12, 13, 13, 11, 19, 21, 13, 21, 18, 16, 15, 11) czV01 = c(1, 4, 5, 5, 2, 3, 4, 11, 8, 1, 11, 12, 10, 16, 5, 15, 18, 12, 23, 13, 22) czV02 = c(0, 3, 1, 5, 1, 6, 3, 4, 7, 12, 2, 8, 8, 5, 3, 6, 4, 6, 11, 5, 11, 1, 13, 9, 7) and I get my guesses by: v = -log((c(czI01, czI02) - c(czV01, czV02))/c(czI01, czI02))/c(czT01, czT02) vguess = mean(v) cguess = var(v)/vguess^2 It's also possible that I'm doing something else completely wrong, but my results seem reasonable so I haven't caught it.
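
    For what it's worth, the intervals that fall apart here are Wald-type ones: they are built from the inverse of the Hessian of the negative log-likelihood, so a singular (or near-singular) Hessian either has no inverse or produces enormous variances, and the symmetric interval can dip below zero for a parameter that must be positive. A profile-likelihood interval avoids the inversion entirely; it walks each parameter and keeps the values whose profiled deviance stays under a chi-square cutoff. In general notation (a sketch, not specific to this model):

        \[
        \widehat{\operatorname{Var}}(\hat\theta) \approx H(\hat\theta)^{-1},
        \qquad
        \hat\theta_i \pm z_{1-\alpha/2}\,\sqrt{\big[H(\hat\theta)^{-1}\big]_{ii}}
        \quad\text{(Wald, needs the inverse)}
        \]
        \[
        \Big\{\theta_i : 2\big[\ell(\hat\theta) - \ell_p(\theta_i)\big] \le \chi^2_{1,\,1-\alpha}\Big\}
        \quad\text{(profile likelihood, no inverse required)}
        \]

    Here H is the Hessian of the negative log-likelihood at the optimum and \ell_p is the profiled log-likelihood. One common workaround for the positivity constraint is to optimize on a log scale (estimate log CV1 and exponentiate afterwards), which keeps any resulting interval away from negative values.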

    Read the article

  • RavenDB - Can I create/query an index, from an application that doesn't share the CLR objects that were persisted?

    - by Jaz Lalli
    This post goes some way to answering this question (I'll include the answer later), but I was hoping for some further details. We have a number of applications that each need to access/manipulate data from Raven in their own way. Data is only written via the main web application. Other apps include batch-style tasks, reporting etc. In an attempt to keep each of these as de-coupled as possible, they are separate solutions. That being the case, how can I, from the reporting application, create indexes over the existing data, using my locally defined types? The answer from the linked question states As long as the structure of the classes you are deserializing into partially matches the structure of the data, it shouldn't make a difference. The RavenDB server doesn't care at all what classes you use in the client. You certainly could share a dll, or even share a portable dll if you are targeting a different platform. But you are correct that it is not necessary. However, you should be aware of the Raven-Clr-Type metadata value. The RavenDB client sets this when storing the original document. It is consumed back by the client to assist with deserialization, but it is not fully enforced It's the first part of that that I wanted clarification on. Do the object graphs for the docs on the server and types in my application have to match exactly? If the Click document on the server is { "Visit": { "Version": "0", "Domain": "www.mydomain.com", "Page": "/index", "QueryString": "", "IPAddress": "127.0.0.1", "Guid": "10cb6886-cb5c-46f8-94ed-4b0d45a5e9ca", "MetaData": { "Version": "1", "CreatedDate": "2012-11-09T15:11:03.5669038Z", "UpdatedDate": "2012-11-09T15:11:03.5669038Z", "DeletedDate": null } }, "ResultId": "Results/1", "ProductCode": "280", "MetaData": { "Version": "1", "CreatedDate": "2012-11-09T15:14:26.1332596Z", "UpdatedDate": "2012-11-09T15:14:26.1332596Z", "DeletedDate": null } } Is it possible (and if so, how?), to create a Map index from my application, which defines the Click class as follows? class Click { public Guid Guid {get;set;} public int ProductCode {get;set;} public DateTime CreatedDate {get;set;} } Or would my class have to be look like this? (where the custom types are defined as a sub-set of the properties on the document above, with matching property names) class Click { public Visit Visit {get;set;} public int ProductCode {get;set;} public MetaData MetaData {get;set;} } UPDATE Following on from the answer below, here's the code I managed to get working. Index public class Clicks_ByVisitGuidAndProductCode : AbstractIndexCreationTask { public override IndexDefinition CreateIndexDefinition() { return new IndexDefinition { Map = "from click in docs.Clicks select new {Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate}", TransformResults = "results.Select(click => new {Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate})" }; } } Query var query = _documentSession.Query<ReportClick, Clicks_ByVisitGuidAndProductCode>() .Customize(x => x.WaitForNonStaleResultsAsOfNow()) .Where(x => x.CreatedDate >= start.Date && x.CreatedDate < end.Date); where Click is public class Click { public Guid Guid { get; set; } public int ProductCode { get; set; } public DateTime CreatedDate { get; set; } } Many thanks @MattJohnson.

    Read the article

  • Please give the solution of the following programs in R Programming

    - by NEETHU
    Table below gives data concerning the performance of 28 National Football League teams in 1976. It is suspected that the no. of yards gained rushing by opponents (x8) has an effect on the no. of games won by a team (y).
    (a) Fit a simple linear regression model relating games won (y) to yards gained rushing by opponents (x8).
    (b) Construct the analysis of variance table and test for significance of regression.
    (c) Find a 95% CI on the slope.
    (d) What percent of the total variability in y is explained by this model?
    (e) Find a 95% CI on the mean number of games won if opponents' yards rushing is limited to 2000 yards.

    Team  y   x8
    1     10  2205
    2     11  2096
    3     11  1847
    4     13  1803
    5     10  1457
    6     11  1848
    7     10  1564
    8     11  1821
    9      4  2577
    10     2  2476
    11     7  1984
    12    10  1917
    13     9  1761
    14     9  1709
    15     6  1901
    16     5  2288
    17     5  2072
    18     5  2861
    19     6  2411
    20     4  2289
    21     3  2203
    22     3  2592
    23     4  2053
    24    10  1979
    25     6  2048
    26     8  1786
    27     2  2876
    28     0  2560

    Suppose we would like to use the model developed in problem 1 to predict the no. of games a team will win if it can limit opponents' yards rushing to 1800 yards. Find a point estimate of the no. of games won when x8 = 1800. Find a 90% prediction interval on the no. of games won.

    The purity of Oxygen produced by a fractionation process is thought to be related to the percentage of Hydrocarbon in the main condenser of the processing unit. 20 samples are shown below.

    Purity (%)  Hydrocarbon (%)
    86.91       1.02
    89.85       1.11
    90.28       1.43
    86.34       1.11
    92.58       1.01
    87.33       0.95
    86.29       1.11
    91.86       0.87
    95.61       1.43
    89.86       1.02
    96.73       1.46
    99.42       1.55
    98.66       1.55
    96.07       1.55
    93.65       1.4
    87.31       1.15
    95          1.01
    96.85       0.99
    85.2        0.95
    90.56       0.98

    (a) Fit a simple linear regression model to the data.
    (b) Test the hypothesis H0: β = 0.
    (c) Calculate R².
    (d) Find a 95% CI on the slope.
    (e) Find a 95% CI on the mean purity when the Hydrocarbon % is 1.

    Consider the Oxygen plant data in Problem 3 and assume that purity and Hydrocarbon percentage are jointly normally distributed r.v.s.
    (a) What is the correlation between Oxygen purity and Hydrocarbon %?
    (b) Test the hypothesis that ρ = 0.
    (c) Construct a 95% CI for ρ.
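
    All of these parts reduce to standard simple-linear-regression quantities; for reference, sketched in general notation rather than worked for these data:

        \[
        \hat\beta_1=\frac{S_{xy}}{S_{xx}}=\frac{\sum_i (x_i-\bar x)(y_i-\bar y)}{\sum_i (x_i-\bar x)^2},
        \qquad
        \hat\beta_0=\bar y-\hat\beta_1\bar x,
        \qquad
        R^2=1-\frac{SS_{res}}{SS_{tot}}
        \]
        \[
        \hat\beta_1 \pm t_{\alpha/2,\,n-2}\sqrt{\frac{MS_{res}}{S_{xx}}}
        \quad\text{(CI on the slope)},
        \qquad
        \hat y_0 \pm t_{\alpha/2,\,n-2}\sqrt{MS_{res}\Big(\tfrac{1}{n}+\tfrac{(x_0-\bar x)^2}{S_{xx}}\Big)}
        \quad\text{(CI on the mean response at } x_0\text{)}
        \]
        \[
        \hat y_0 \pm t_{\alpha/2,\,n-2}\sqrt{MS_{res}\Big(1+\tfrac{1}{n}+\tfrac{(x_0-\bar x)^2}{S_{xx}}\Big)}
        \quad\text{(prediction interval for a new observation at } x_0\text{)}
        \]

    In R these all come out of lm() plus summary(), anova(), confint(), cor.test() and predict() with interval = "confidence" or interval = "prediction".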

    Read the article

  • How to fix these compiler errors?

    - by Sandra Schlichting
    I have this source code from 2001 that I would like to compile. It gives this: $ make g++ -O99 -Wall -DLINUX -pedantic -c -o audio.o audio.cpp In file included from audio.cpp:7: audio.h:14: error: use of enum ‘mad_flow’ without previous declaration audio.h:15: error: use of enum ‘mad_flow’ without previous declaration audio.h:17: error: use of enum ‘mad_flow’ without previous declaration audio.cpp: In function ‘mad_flow audio::input(void*, mad_stream*)’: audio.cpp:19: error: new declaration ‘mad_flow audio::input(void*, mad_stream*)’ audio.h:14: error: ambiguates old declaration ‘int audio::input(void*, mad_stream*)’ audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private audio.cpp:23: error: within this context audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private audio.cpp:23: error: within this context audio.h:10: error: ‘char* audio::stream::Buffer’ is private audio.cpp:26: error: within this context audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private audio.cpp:26: error: within this context audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private audio.cpp:27: error: within this context audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private audio.cpp:27: error: within this context audio.cpp: In function ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’: audio.cpp:49: error: new declaration ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’ audio.h:15: error: ambiguates old declaration ‘int audio::output(void*, const mad_header*, mad_pcm*)’ audio.cpp: In function ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’: audio.cpp:83: error: new declaration ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’ audio.h:17: error: ambiguates old declaration ‘int audio::error(void*, mad_stream*, mad_frame*)’ audio.cpp: In constructor ‘audio::stream::stream(const char*)’: audio.cpp:119: error: ‘input’ was not declared in this scope audio.cpp:122: error: ‘output’ was not declared in this scope audio.cpp:123: error: ‘error’ was not declared in this scope make: *** [audio.o] Error 1 audio.h contains #include <stdlib.h> #include "mad.h" namespace audio { class stream { private: char* Buffer; size_t BufferSize, BufferPos; struct mad_decoder Decoder; friend enum mad_flow input(void* Data, struct mad_stream* MadStream); friend enum mad_flow output(void* Data, const struct mad_header* Header, struct mad_pcm* PCM); friend enum mad_flow error(void* Data, struct mad_stream* MadStream, struct mad_frame* Frame); public: stream(const char* FileName); ~stream(); void play(); }; } I have tried to just insert enum mad_flow {}; but that just gave a new problem. Can anyone see how to fix this?

    Read the article

  • Flex 4 IntelliJ IDEA wrapper html crashes browser

    - by user347641
    I'm building an Flex 4 application using IntelliJ IDEA generated sample Flex application. I replace the mxml with the following code from the book Hello Flex 4. It simply crashes the browser when I run it. I tried it on both FF and Chrome. Any clues? [Bindable] public var _bread:Number = Number.NaN; ]]></fx:Script> <fx:Declarations> <s:RadioButtonGroup id="moralityRBG"/> <s:RadioButtonGroup id="restaurantRBG" selectedValue="{_theory.length % 2 == 0 ? 'smoking' : 'non'}"/> </fx:Declarations> <s:Panel width="100%" height="100%" title="Simple Components!"> <s:layout> <s:HorizontalLayout paddingLeft="5" paddingTop="5"/> </s:layout> <s:VGroup> <s:TextArea id="textArea" width="200" height="50" text="@{_theory}"/> <s:TextInput id="textInput" width="200" text="@{_theory}"/> <s:HSlider id="hSlider" minimum="0" maximum="11" liveDragging="true" width="200" value="@{_bread}"/> <s:VSlider id="vSlider" minimum="0" maximum="11" liveDragging="true" height="50" value="@{_bread}"/> <s:Button label="{_theory}" width="200" color="{alarmTB.selected ? 0xFF0000 : 0}" click="_bread = Math.min(_theory.length, 11)"/> <s:CheckBox id="checkBox" selected="{_bread % 2 == 0}" label="even?"/> </s:VGroup> <s:VGroup> <s:RadioButton label="Good" value="good" group="{moralityRBG}"/> <s:RadioButton label="Evil" value="evil" group="{moralityRBG}"/> <s:RadioButton label="Beyond" value="beyond" group="{moralityRBG}"/> <s:RadioButton label="Smoking" value="smoking" group="{restaurantRBG}"/> <s:RadioButton label="Non-Smoking" value="non" group="{restaurantRBG}"/> <s:ToggleButton id="alarmTB" label="ALARM!"/> <s:NumericStepper id="numericStepper" value="@{_bread}" minimum="0" maximum="11" stepSize="1"/> <s:Spinner id="spinner" value="@{_bread}" minimum="0" maximum="11" stepSize="1"/> </s:VGroup> </s:Panel>

    Read the article

  • VBA/SQL recordsets

    - by intruesiive
    The project I'm asking about is for sending an email to teachers asking what books they're using for the classes they're teaching next semester, so that the books can be ordered. I have a query that compares the course number of this upcoming semester's classes to the course numbers of historical textbook orders, pulling out only those classes that are being taught this semester. That's where I get lost. I have a table that contains the following fields: Professor, Course Number, Year, Book, Title. The data looks like this:

    professor  year  course number  title
    smith      13    1111           Pride and Prejudice
    smith      13    1111           The Fountainhead
    smith      13    1222           The Alchemist
    smith      12    1111           Pride and Prejudice
    smith      11    1222           Infinite Jest
    smith      10    1333           The Bible
    smith      13    1333           The Bible
    smith      12    1222           The Alchemist
    smith      10    1111           Moby Dick
    johnson    12    1222           The Tipping Point
    johnson    11    1333           Anna Kerenina
    johnson    10    1333           Everything is Illuminated
    johnson    12    1222           The Savage Detectives
    johnson    11    1333           In Search of Lost Time
    johnson    10    1333           Great Expectations
    johnson    9     1222           Proust on the Shore

    Here's what I need the code to do "on paper": Group the records by professor. Determine every unique course number in that group, and group records by course number. For each unique course number, determine the highest year associated. Then spit out every record with that professor+course number+year combination. With the sample data, the results would be:

    professor  year  course number  title
    smith      13    1111           Pride and Prejudice
    smith      13    1111           The Fountainhead
    smith      13    1222           The Alchemist
    smith      13    1333           The Bible
    johnson    12    1222           The Tipping Point
    johnson    11    1333           Anna Kerenina
    johnson    12    1222           The Savage Detectives
    johnson    11    1333           In Search of Lost Time

    I'm thinking I should make a record set for each teacher, and within that, another record set for each course number. Within the course number record set, I need the system to determine what the highest year number is - maybe store that in a variable? Then pull out every associated record so that if the teacher ordered 3 books the last time they taught that class (whether it was in 2013 or 2012 and so on) all three books display. I'm not sure I'm thinking of record sets in the right way, though. My SQL so far is basic and clearly doesn't work:

    SELECT [All].Professor, [All].Course, Max([All].Year)
    FROM [All]
    GROUP BY [All].Professor, [All].Course;

    Thanks for your help.
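
    The "highest year per professor and course, then every matching record" step is usually done by joining the table to an aggregate of itself rather than with nested recordsets. The sketch below uses Python's built-in sqlite3 only so it runs standalone; the table name, the shortened column name "course", and the subset of rows are illustrative, but the SELECT ... JOIN is the part that should map back to Access SQL.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE orders (professor TEXT, year INTEGER, course INTEGER, title TEXT)")
        con.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", [
            ("smith", 13, 1111, "Pride and Prejudice"),
            ("smith", 13, 1111, "The Fountainhead"),
            ("smith", 12, 1111, "Pride and Prejudice"),
            ("smith", 13, 1222, "The Alchemist"),
            ("smith", 11, 1222, "Infinite Jest"),
            ("smith", 13, 1333, "The Bible"),
            ("smith", 10, 1333, "The Bible"),
            ("johnson", 12, 1222, "The Tipping Point"),
            ("johnson", 12, 1222, "The Savage Detectives"),
            ("johnson", 11, 1333, "Anna Kerenina"),
            ("johnson", 9, 1222, "Proust on the Shore"),
        ])

        # Aggregate once to find the most recent year for each professor/course pair,
        # then join back so every book from that most recent offering is returned.
        latest_books = con.execute("""
            SELECT o.professor, o.year, o.course, o.title
            FROM orders AS o
            JOIN (SELECT professor, course, MAX(year) AS latest
                  FROM orders
                  GROUP BY professor, course) AS m
              ON o.professor = m.professor
             AND o.course = m.course
             AND o.year = m.latest
            ORDER BY o.professor, o.course, o.title
        """).fetchall()

        for row in latest_books:
            print(row)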

    Read the article

  • GROUP BY and SUM distinct date across 2 tables

    - by kenitech
    I'm not sure if this is possible in one MySQL query so I might just combine the results via PHP. I have 2 tables: 'users' and 'billing'. I'm trying to group summed activity for every date that is available in these two tables. 'users' is not historical data but 'billing' contains a record for each transaction. In this example I am showing a user's status, which I'd like to sum by created date, and deposit amounts that I would also like to sum by created date. I realize there is a bit of a disconnect between the data but I'd like to sum all of it together and display it as seen below. This will show me an overview of all of the users by when they were created and what the current statuses are next to total transactions. I've tried UNION as well as LEFT JOIN but I can't seem to get either to work. The UNION example is pretty close but doesn't combine the dates into one row:

    ( SELECT created, SUM(status) as totalActive, NULL as totalDeposit FROM users GROUP BY created )
    UNION
    ( SELECT created, NULL as totalActive, SUM(transactionAmount) as totalDeposit FROM billing GROUP BY created )

    I've also tried using a date lookup table and joining on the dates but the SUM values are being added multiple times. Note: I don't care about the userIds at all but have them in here for the example.

    users table (where status of '1' denotes "active") (one record for each user):
    created    | userId | status
    2010-03-01 | 10     | 0
    2010-03-01 | 11     | 1
    2010-03-01 | 12     | 1
    2010-03-10 | 13     | 0
    2010-03-12 | 14     | 1
    2010-03-12 | 15     | 1
    2010-03-13 | 16     | 0
    2010-03-15 | 17     | 1

    billing table (record created for every instance of a billing "transaction"):
    created    | userId | transactionAmount
    2010-03-01 | 10     | 50
    2010-03-01 | 18     | 50
    2010-03-01 | 19     | 100
    2010-03-10 | 89     | 55
    2010-03-15 | 16     | 50
    2010-03-15 | 12     | 90
    2010-03-22 | 99     | 150

    desired result:
    created    | sumStatusActive | sumStatusInactive | sumTransactions
    2010-03-01 | 2               | 1                 | 200
    2010-03-10 | 0               | 1                 | 55
    2010-03-12 | 2               | 0                 | 0
    2010-03-13 | 0               | 0                 | 0
    2010-03-15 | 1               | 0                 | 140
    2010-03-22 | 0               | 0                 | 150

    Table dump:
    CREATE TABLE IF NOT EXISTS `users` ( `created` date NOT NULL, `userId` int(11) NOT NULL, `status` smallint(6) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    INSERT INTO `users` (`created`, `userId`, `status`) VALUES ('2010-03-01', 10, 0), ('2010-03-01', 11, 1), ('2010-03-01', 12, 1), ('2010-03-10', 13, 0), ('2010-03-12', 14, 1), ('2010-03-12', 15, 1), ('2010-03-13', 16, 0), ('2010-03-15', 17, 1);
    CREATE TABLE IF NOT EXISTS `billing` ( `created` date NOT NULL, `userId` int(11) NOT NULL, `transactionAmount` int(11) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    INSERT INTO `billing` (`created`, `userId`, `transactionAmount`) VALUES ('2010-03-01', 10, 50), ('2010-03-01', 18, 50), ('2010-03-01', 19, 100), ('2010-03-10', 89, 55), ('2010-03-15', 16, 50), ('2010-03-15', 12, 90), ('2010-03-22', 99, 150);
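
    One approach that avoids double-counting is to aggregate each table on its own and then LEFT JOIN both aggregates onto the union of dates. The sketch below runs the idea against Python's sqlite3 so it is self-contained (the same SELECT should work in MySQL), and it reads sumStatusInactive as "count of users with status = 0".

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE users   (created TEXT, userId INTEGER, status INTEGER);
            CREATE TABLE billing (created TEXT, userId INTEGER, transactionAmount INTEGER);
            INSERT INTO users VALUES
              ('2010-03-01',10,0),('2010-03-01',11,1),('2010-03-01',12,1),
              ('2010-03-10',13,0),('2010-03-12',14,1),('2010-03-12',15,1),
              ('2010-03-13',16,0),('2010-03-15',17,1);
            INSERT INTO billing VALUES
              ('2010-03-01',10,50),('2010-03-01',18,50),('2010-03-01',19,100),
              ('2010-03-10',89,55),('2010-03-15',16,50),('2010-03-15',12,90),
              ('2010-03-22',99,150);
        """)

        query = """
            SELECT d.created,
                   COALESCE(u.active, 0)   AS sumStatusActive,
                   COALESCE(u.inactive, 0) AS sumStatusInactive,
                   COALESCE(b.total, 0)    AS sumTransactions
            FROM (SELECT created FROM users UNION SELECT created FROM billing) AS d
            LEFT JOIN (SELECT created, SUM(status) AS active, SUM(status = 0) AS inactive
                       FROM users GROUP BY created) AS u ON u.created = d.created
            LEFT JOIN (SELECT created, SUM(transactionAmount) AS total
                       FROM billing GROUP BY created) AS b ON b.created = d.created
            ORDER BY d.created
        """
        for row in con.execute(query):
            print(row)

    With the sample data this reproduces the desired rows except 2010-03-13, which comes out as 0 / 1 / 0 because user 16 has status 0 on that date.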

    Read the article

  • Please help rails problem with stringify_keys error

    - by richard moss
    I have been trying to solve this for ages and can't figure it out. I have a form like so (taking out a lot of other fields) <% form_for @machine_enquiry, machine_enquiry_path(@machine_enquiry) do|me_form| %> <% me_form.fields_for :messages_attributes do |f| %> <%= f.text_field :title -%> <% end %> <%= me_form.submit 'Send message' %> <% end %> And an update action like @machine_enquiry = MachineEnquiry.find(params[:id]) @machine_enquiry.update_attributes(params[:machine_enquiry] And a machine_enquiry class like so: class MachineEnquiry < ActiveRecord::Base has_many :messages, :as => :messagable, :dependent => :destroy accepts_nested_attributes_for :messages end I am getting an error like so: NoMethodError in Machine enquiriesController#update undefined method `stringify_keys' for "2":String RAILS_ROOT: C:/INSTAN~2/rails_apps/Macrotec28th Application Trace | Framework Trace | Full Trace C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:294:in `assign_nested_attributes_for_collection_association' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:293:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:293:in `assign_nested_attributes_for_collection_association' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:215:in `messages_attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2745:in `send' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2745:in `attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2741:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2741:in `attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2627:in `update_attributes' C:/INSTAN~2/rails_apps/Macrotec28th/app/controllers/machine_enquiries_controller.rb:74:in `update' C:/INSTAN~2/rails_apps/Macrotec28th/app/controllers/machine_enquiries_controller.rb:72:in `update' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:294:in `assign_nested_attributes_for_collection_association' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:293:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:293:in `assign_nested_attributes_for_collection_association' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:215:in `messages_attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2745:in `send' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2745:in `attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2741:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2741:in `attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2627:in `update_attributes' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/mime_responds.rb:106:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/mime_responds.rb:106:in `respond_to' 
C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:1322:in `send' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:1322:in `perform_action_without_filters' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/filters.rb:617:in `call_filters' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/core_ext/benchmark.rb:17:in `ms' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/core_ext/benchmark.rb:10:in `realtime' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/core_ext/benchmark.rb:17:in `ms' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/rescue.rb:160:in `perform_action_without_flash' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/flash.rb:141:in `perform_action' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:523:in `send' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:523:in `process_without_filters' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/filters.rb:606:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:391:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:386:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/routing/route_set.rb:433:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:88:in `dispatch' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:111:in `_call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:82:in `initialize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:29:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:29:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:9:in `cache' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:28:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/head.rb:9:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/methodoverride.rb:24:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/params_parser.rb:15:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/rewindable_input.rb:25:in `call' 
C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/session/cookie_store.rb:93:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/reloader.rb:9:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/failsafe.rb:11:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/lock.rb:11:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/lock.rb:11:in `synchronize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/lock.rb:11:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:106:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/cgi_process.rb:44:in `dispatch_cgi' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:102:in `dispatch_cgi' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:28:in `dispatch' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/rails.rb:76:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/rails.rb:74:in `synchronize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/rails.rb:74:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:159:in `process_client' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:158:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:158:in `process_client' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `initialize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `new' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:268:in `initialize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:268:in `new' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:268:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/configurator.rb:282:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/configurator.rb:281:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/configurator.rb:281:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:128:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/command.rb:212:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:281 C:/INSTAN~2/ruby/bin/mongrel_rails:19:in `load' C:/INSTAN~2/ruby/bin/mongrel_rails:19 C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:294:in `assign_nested_attributes_for_collection_association' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:293:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:293:in `assign_nested_attributes_for_collection_association' 
C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/nested_attributes.rb:215:in `messages_attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2745:in `send' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2745:in `attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2741:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2741:in `attributes=' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2627:in `update_attributes' C:/INSTAN~2/rails_apps/Macrotec28th/app/controllers/machine_enquiries_controller.rb:74:in `update' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/mime_responds.rb:106:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/mime_responds.rb:106:in `respond_to' C:/INSTAN~2/rails_apps/Macrotec28th/app/controllers/machine_enquiries_controller.rb:72:in `update' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:1322:in `send' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:1322:in `perform_action_without_filters' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/filters.rb:617:in `call_filters' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/core_ext/benchmark.rb:17:in `ms' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/core_ext/benchmark.rb:10:in `realtime' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/core_ext/benchmark.rb:17:in `ms' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/rescue.rb:160:in `perform_action_without_flash' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/flash.rb:141:in `perform_action' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:523:in `send' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:523:in `process_without_filters' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/filters.rb:606:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:391:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/base.rb:386:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/routing/route_set.rb:433:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:88:in `dispatch' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:111:in `_call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:82:in `initialize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:29:in `call' 
C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:29:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:9:in `cache' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/query_cache.rb:28:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/head.rb:9:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/methodoverride.rb:24:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/params_parser.rb:15:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/rewindable_input.rb:25:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/session/cookie_store.rb:93:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/reloader.rb:9:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/failsafe.rb:11:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/lock.rb:11:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/lock.rb:11:in `synchronize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/lock.rb:11:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:106:in `call' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/cgi_process.rb:44:in `dispatch_cgi' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:102:in `dispatch_cgi' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/dispatcher.rb:28:in `dispatch' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/rails.rb:76:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/rails.rb:74:in `synchronize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/rails.rb:74:in `process' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:159:in `process_client' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:158:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:158:in `process_client' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `initialize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `new' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:285:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:268:in `initialize' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:268:in `new' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:268:in `run' 
C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/configurator.rb:282:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/configurator.rb:281:in `each' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/configurator.rb:281:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:128:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel/command.rb:212:in `run' C:/INSTAN~2/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:281 C:/INSTAN~2/ruby/bin/mongrel_rails:19:in `load' C:/INSTAN~2/ruby/bin/mongrel_rails:19 Request Parameters: {"commit"=>"Send message", "_method"=>"put", "machine_enquiry"=>{"messages_attributes"=>{"message"=>"2", "title"=>"1", "message_type_id"=>"1", "contact_detail_ids"=>["1", "11"]}}, "id"=>"2", "datetime"=>""} Why am I getting this error? Can anyone help with this?
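
    The traceback shows assign_nested_attributes_for_collection_association iterating machine_enquiry[messages_attributes] and calling stringify_keys on each value, and in the request parameters above those values are plain strings ("2", "1", ...), hence the NoMethodError. The likely cause is that the form uses fields_for :messages_attributes; with accepts_nested_attributes_for the conventional call in Rails 2.3 is fields_for :messages, so each message's fields end up nested one level deeper under their own index. Purely as an illustration of the shape difference (Python, not Rails):

        # Illustration only (Python, not Rails): the nested-attributes code walks the
        # messages_attributes hash and treats each *value* as an attributes hash.
        flat   = {"messages_attributes": {"title": "1", "message": "2"}}           # what the form posted
        nested = {"messages_attributes": {"0": {"title": "1", "message": "2"}}}    # shape Rails expects

        for key, value in flat["messages_attributes"].items():
            # Rails calls value.stringify_keys here; "1" and "2" are plain strings,
            # which is exactly the NoMethodError in the traceback above.
            print(key, type(value).__name__)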

    Read the article

  • Generating a drop down list of timezones with PHP

    - by Xeoncross
    Most sites need some way to show the dates on the site in the users preferred timezone. Below are two lists that I found and then one method using the built in PHP DateTime class in PHP 5. I need help knowing which of these would be the best to attempt to use when trying to get the UTC offset from the user on register. One: <option value="-12">[UTC - 12] Baker Island Time</option> <option value="-11">[UTC - 11] Niue Time, Samoa Standard Time</option> <option value="-10">[UTC - 10] Hawaii-Aleutian Standard Time, Cook Island Time</option> <option value="-9.5">[UTC - 9:30] Marquesas Islands Time</option> <option value="-9">[UTC - 9] Alaska Standard Time, Gambier Island Time</option> <option value="-8">[UTC - 8] Pacific Standard Time</option> <option value="-7">[UTC - 7] Mountain Standard Time</option> <option value="-6">[UTC - 6] Central Standard Time</option> <option value="-5">[UTC - 5] Eastern Standard Time</option> <option value="-4.5">[UTC - 4:30] Venezuelan Standard Time</option> <option value="-4">[UTC - 4] Atlantic Standard Time</option> <option value="-3.5">[UTC - 3:30] Newfoundland Standard Time</option> <option value="-3">[UTC - 3] Amazon Standard Time, Central Greenland Time</option> <option value="-2">[UTC - 2] Fernando de Noronha Time, South Georgia &amp; the South Sandwich Islands Time</option> <option value="-1">[UTC - 1] Azores Standard Time, Cape Verde Time, Eastern Greenland Time</option> <option value="0" selected="selected">[UTC] Western European Time, Greenwich Mean Time</option> <option value="1">[UTC + 1] Central European Time, West African Time</option> <option value="2">[UTC + 2] Eastern European Time, Central African Time</option> <option value="3">[UTC + 3] Moscow Standard Time, Eastern African Time</option> <option value="3.5">[UTC + 3:30] Iran Standard Time</option> <option value="4">[UTC + 4] Gulf Standard Time, Samara Standard Time</option> <option value="4.5">[UTC + 4:30] Afghanistan Time</option> <option value="5">[UTC + 5] Pakistan Standard Time, Yekaterinburg Standard Time</option> <option value="5.5">[UTC + 5:30] Indian Standard Time, Sri Lanka Time</option> <option value="5.75">[UTC + 5:45] Nepal Time</option> <option value="6">[UTC + 6] Bangladesh Time, Bhutan Time, Novosibirsk Standard Time</option> <option value="6.5">[UTC + 6:30] Cocos Islands Time, Myanmar Time</option> <option value="7">[UTC + 7] Indochina Time, Krasnoyarsk Standard Time</option> <option value="8">[UTC + 8] Chinese Standard Time, Australian Western Standard Time, Irkutsk Standard Time</option> <option value="8.75">[UTC + 8:45] Southeastern Western Australia Standard Time</option> <option value="9">[UTC + 9] Japan Standard Time, Korea Standard Time, Chita Standard Time</option> <option value="9.5">[UTC + 9:30] Australian Central Standard Time</option> <option value="10">[UTC + 10] Australian Eastern Standard Time, Vladivostok Standard Time</option> <option value="10.5">[UTC + 10:30] Lord Howe Standard Time</option> <option value="11">[UTC + 11] Solomon Island Time, Magadan Standard Time</option> <option value="11.5">[UTC + 11:30] Norfolk Island Time</option> <option value="12">[UTC + 12] New Zealand Time, Fiji Time, Kamchatka Standard Time</option> <option value="12.75">[UTC + 12:45] Chatham Islands Time</option> <option value="13">[UTC + 13] Tonga Time, Phoenix Islands Time</option> <option value="14">[UTC + 14] Line Island Time</option> Or using PHP friendly values: <option value="Pacific/Midway">(GMT-11:00) Midway Island, Samoa</option> <option value="America/Adak">(GMT-10:00) 
Hawaii-Aleutian</option> <option value="Etc/GMT+10">(GMT-10:00) Hawaii</option> <option value="Pacific/Marquesas">(GMT-09:30) Marquesas Islands</option> <option value="Pacific/Gambier">(GMT-09:00) Gambier Islands</option> <option value="America/Anchorage">(GMT-09:00) Alaska</option> <option value="America/Ensenada">(GMT-08:00) Tijuana, Baja California</option> <option value="Etc/GMT+8">(GMT-08:00) Pitcairn Islands</option> <option value="America/Los_Angeles">(GMT-08:00) Pacific Time (US & Canada)</option> <option value="America/Denver">(GMT-07:00) Mountain Time (US & Canada)</option> <option value="America/Chihuahua">(GMT-07:00) Chihuahua, La Paz, Mazatlan</option> <option value="America/Dawson_Creek">(GMT-07:00) Arizona</option> <option value="America/Belize">(GMT-06:00) Saskatchewan, Central America</option> <option value="America/Cancun">(GMT-06:00) Guadalajara, Mexico City, Monterrey</option> <option value="Chile/EasterIsland">(GMT-06:00) Easter Island</option> <option value="America/Chicago">(GMT-06:00) Central Time (US & Canada)</option> <option value="America/New_York">(GMT-05:00) Eastern Time (US & Canada)</option> <option value="America/Havana">(GMT-05:00) Cuba</option> <option value="America/Bogota">(GMT-05:00) Bogota, Lima, Quito, Rio Branco</option> <option value="America/Caracas">(GMT-04:30) Caracas</option> <option value="America/Santiago">(GMT-04:00) Santiago</option> <option value="America/La_Paz">(GMT-04:00) La Paz</option> <option value="Atlantic/Stanley">(GMT-04:00) Faukland Islands</option> <option value="America/Campo_Grande">(GMT-04:00) Brazil</option> <option value="America/Goose_Bay">(GMT-04:00) Atlantic Time (Goose Bay)</option> <option value="America/Glace_Bay">(GMT-04:00) Atlantic Time (Canada)</option> <option value="America/St_Johns">(GMT-03:30) Newfoundland</option> <option value="America/Araguaina">(GMT-03:00) UTC-3</option> <option value="America/Montevideo">(GMT-03:00) Montevideo</option> <option value="America/Miquelon">(GMT-03:00) Miquelon, St. 
Pierre</option> <option value="America/Godthab">(GMT-03:00) Greenland</option> <option value="America/Argentina/Buenos_Aires">(GMT-03:00) Buenos Aires</option> <option value="America/Sao_Paulo">(GMT-03:00) Brasilia</option> <option value="America/Noronha">(GMT-02:00) Mid-Atlantic</option> <option value="Atlantic/Cape_Verde">(GMT-01:00) Cape Verde Is.</option> <option value="Atlantic/Azores">(GMT-01:00) Azores</option> <option value="Europe/Belfast">(GMT) Greenwich Mean Time : Belfast</option> <option value="Europe/Dublin">(GMT) Greenwich Mean Time : Dublin</option> <option value="Europe/Lisbon">(GMT) Greenwich Mean Time : Lisbon</option> <option value="Europe/London">(GMT) Greenwich Mean Time : London</option> <option value="Africa/Abidjan">(GMT) Monrovia, Reykjavik</option> <option value="Europe/Amsterdam">(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna</option> <option value="Europe/Belgrade">(GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague</option> <option value="Europe/Brussels">(GMT+01:00) Brussels, Copenhagen, Madrid, Paris</option> <option value="Africa/Algiers">(GMT+01:00) West Central Africa</option> <option value="Africa/Windhoek">(GMT+01:00) Windhoek</option> <option value="Asia/Beirut">(GMT+02:00) Beirut</option> <option value="Africa/Cairo">(GMT+02:00) Cairo</option> <option value="Asia/Gaza">(GMT+02:00) Gaza</option> <option value="Africa/Blantyre">(GMT+02:00) Harare, Pretoria</option> <option value="Asia/Jerusalem">(GMT+02:00) Jerusalem</option> <option value="Europe/Minsk">(GMT+02:00) Minsk</option> <option value="Asia/Damascus">(GMT+02:00) Syria</option> <option value="Europe/Moscow">(GMT+03:00) Moscow, St. Petersburg, Volgograd</option> <option value="Africa/Addis_Ababa">(GMT+03:00) Nairobi</option> <option value="Asia/Tehran">(GMT+03:30) Tehran</option> <option value="Asia/Dubai">(GMT+04:00) Abu Dhabi, Muscat</option> <option value="Asia/Yerevan">(GMT+04:00) Yerevan</option> <option value="Asia/Kabul">(GMT+04:30) Kabul</option> <option value="Asia/Yekaterinburg">(GMT+05:00) Ekaterinburg</option> <option value="Asia/Tashkent">(GMT+05:00) Tashkent</option> <option value="Asia/Kolkata">(GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi</option> <option value="Asia/Katmandu">(GMT+05:45) Kathmandu</option> <option value="Asia/Dhaka">(GMT+06:00) Astana, Dhaka</option> <option value="Asia/Novosibirsk">(GMT+06:00) Novosibirsk</option> <option value="Asia/Rangoon">(GMT+06:30) Yangon (Rangoon)</option> <option value="Asia/Bangkok">(GMT+07:00) Bangkok, Hanoi, Jakarta</option> <option value="Asia/Krasnoyarsk">(GMT+07:00) Krasnoyarsk</option> <option value="Asia/Hong_Kong">(GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi</option> <option value="Asia/Irkutsk">(GMT+08:00) Irkutsk, Ulaan Bataar</option> <option value="Australia/Perth">(GMT+08:00) Perth</option> <option value="Australia/Eucla">(GMT+08:45) Eucla</option> <option value="Asia/Tokyo">(GMT+09:00) Osaka, Sapporo, Tokyo</option> <option value="Asia/Seoul">(GMT+09:00) Seoul</option> <option value="Asia/Yakutsk">(GMT+09:00) Yakutsk</option> <option value="Australia/Adelaide">(GMT+09:30) Adelaide</option> <option value="Australia/Darwin">(GMT+09:30) Darwin</option> <option value="Australia/Brisbane">(GMT+10:00) Brisbane</option> <option value="Australia/Hobart">(GMT+10:00) Hobart</option> <option value="Asia/Vladivostok">(GMT+10:00) Vladivostok</option> <option value="Australia/Lord_Howe">(GMT+10:30) Lord Howe Island</option> <option value="Etc/GMT-11">(GMT+11:00) Solomon Is., New Caledonia</option> <option 
value="Asia/Magadan">(GMT+11:00) Magadan</option> <option value="Pacific/Norfolk">(GMT+11:30) Norfolk Island</option> <option value="Asia/Anadyr">(GMT+12:00) Anadyr, Kamchatka</option> <option value="Pacific/Auckland">(GMT+12:00) Auckland, Wellington</option> <option value="Etc/GMT-12">(GMT+12:00) Fiji, Kamchatka, Marshall Is.</option> <option value="Pacific/Chatham">(GMT+12:45) Chatham Islands</option> <option value="Pacific/Tongatapu">(GMT+13:00) Nuku'alofa</option> <option value="Pacific/Kiritimati">(GMT+14:00) Kiritimati</option> Or just using PHP it's self $timezones = DateTimeZone::listAbbreviations(); $cities = array(); foreach( $timezones as $key => $zones ) { foreach( $zones as $id => $zone ) { /** * Only get timezones explicitely not part of "Others". * @see http://www.php.net/manual/en/timezones.others.php */ if ( preg_match( '/^(America|Antartica|Arctic|Asia|Atlantic|Europe|Indian|Pacific)\//', $zone['timezone_id'] ) && $zone['timezone_id']) { $cities[$zone['timezone_id']][] = $key; } } } // For each city, have a comma separated list of all possible timezones for that city. foreach( $cities as $key => $value ) $cities[$key] = join( ', ', $value); // Only keep one city (the first and also most important) for each set of possibilities. $cities = array_unique( $cities ); // Sort by area/city name. ksort( $cities ); It seems like the last one would be the safest as it would grow with the PHP release being used. You could also flip that array around when needed to tie timezones to city names.
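
    For comparison, the same "derive the list from the runtime's own tz database" idea looks like this outside PHP. The sketch below assumes Python 3.9+ (for zoneinfo) and an available tz database, and is only meant to show that the identifier list, and therefore the dropdown, can be generated rather than hand-maintained.

        from zoneinfo import available_timezones

        # Mirror the region filter used in the PHP snippet above (with Antarctica
        # spelled out); the exact set of regions to expose is a design choice.
        regions = ("America", "Antarctica", "Arctic", "Asia",
                   "Atlantic", "Europe", "Indian", "Pacific")
        zones = sorted(z for z in available_timezones() if z.startswith(regions))
        options = "\n".join('<option value="%s">%s</option>' % (z, z.replace("_", " "))
                            for z in zones)
        print(options)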

    Read the article

  • SBS 2008 SP2 Backup - Volume Shadow Copy Operation Failed

    - by Robert Ortisi
    Server Setup Exchange 2007 Version: 08.03.0192.001 (Rollup 4) Windows Small Business Server 2008 SP2 (Rollup 5) Exchange set up on D: drive (449 GB / 698 GB Free) 80 GB / 148 GB Free on OS drive. Issue Backup Failure (VSS related) Backup Software Windows Server Backup (ver 1.0) Simplified Error Creation of the shared protection point timed out. Unknown error (0x81000101) The flush and hold writes operation on volume C: timed out while waiting for a release writes command. Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem. What I've tried Server Reboot. Updated Server and Exchange. ReConfigured Sharepoint (Helped resolve last vss error I encountered). registered VSS Dll's (Backups will sometimes work afterwards but VSS writers fail soon after). Tried Implementing Hotfix: http://support.microsoft.com/kb/956136 Tried Implementing Hotfix: http://support.microsoft.com/kb/972135 I left it for a few days and a few backups came through but then began to fail again. Detailed Information Log Name: Application Source: VSS Date: 16/11/2011 8:02:11 PM Event ID: 12341 Task Category: None Level: Warning Keywords: Classic User: N/A Computer: SERVER.DOMAIN.local Description: Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem. Operation: Executing Asynchronous Operation Context: Current State: flush-and-hold writes Volume Name: \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\ Event Xml: 12341 3 0 0x80000000000000 1651049 Application SERVER.DOMAIN.local 43 \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\ Operation: Executing Asynchronous Operation Context: Current State: flush-and-hold writes Volume Name: \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\ ================================================================================= Log Name: System Source: volsnap Date: 16/11/2011 8:02:11 PM Event ID: 8 Task Category: None Level: Error Keywords: Classic User: N/A Computer: SERVER.DOMAIN.local Description: The flush and hold writes operation on volume C: timed out while waiting for a release writes command. Event Xml: 8 2 0 0x80000000000000 987135 System SERVER.DOMAIN.local ================================================================================== Log Name: Application Source: Microsoft-Windows-Backup Date: 16/11/2011 8:11:18 PM Event ID: 521 Task Category: None Level: Error Keywords: User: SYSTEM Computer: SERVER.DOMAIN.local Description: Backup started at '16/11/2011 9:00:35 AM' failed as Volume Shadow copy operation failed for backup volumes with following error code '2155348001'. Please rerun backup once issue is resolved. 
Event Xml: 521 0 2 0 0 0x8000000000000000 1651065 Application SERVER.DOMAIN.local 2011-11-16T09:00:35.446Z 2155348001 %%2155348001 ================================================================================== Writer name: 'FRS Writer' Writer Id: {d76f5a28-3092-4589-ba48-2958fb88ce29} Writer Instance Id: {ba047fc6-9ce8-44ba-b59f-f2f8c07708aa} State: [5] Waiting for completion Last error: No error Writer name: 'ASR Writer' Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4} Writer Instance Id: {0aace3e2-c840-4572-bf49-7fcc3fbcf56d} State: [1] Stable Last error: No error Writer name: 'Shadow Copy Optimization Writer' Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f} Writer Instance Id: {054593e2-2086-4480-92e5-30386509ed1b} State: [1] Stable Last error: No error Writer name: 'Registry Writer' Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485} Writer Instance Id: {840e6f5f-f35a-4b65-bb20-060cf2ee892a} State: [1] Stable Last error: No error Writer name: 'COM+ REGDB Writer' Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f} Writer Instance Id: {9486bedc-f6e8-424b-b563-8b849d51b1e1} State: [1] Stable Last error: No error Writer name: 'BITS Writer' Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0} Writer Instance Id: {29368bb3-e04b-4404-8fc9-e62dae18da91} State: [1] Stable Last error: No error Writer name: 'Dhcp Jet Writer' Writer Id: {be9ac81e-3619-421f-920f-4c6fea9e93ad} Writer Instance Id: {cfb58c78-9609-4133-8fc8-f66b0d25e12d} State: [5] Waiting for completion Last error: No error ==================================================================================
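    A low-risk first check after the next failed backup (vssadmin ships with Windows Server 2008, so nothing extra needs installing): dump the writer, provider, and shadow-storage state again and compare it with the list above - for instance, whether the Exchange and System writers appear at all, and whether any writer reports a last error other than "No error" right after the failure:

    vssadmin list writers
    vssadmin list providers
    vssadmin list shadowstorage

    If all writers stay stable even when the snapshot fails, the timeout is more likely plain disk I/O pressure at backup time, which matches the "try again when disk activity is lower" hint in the warning.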

    Read the article

  • eth0 and eth1 both assigned same IP on boot

    - by Banjer
    I have a physical SLES 11 SP2 server on a Sun Fire x4140 that is giving me problems with networking upon reboot. The NICs are onboard. The networking appears successful during boot, but network services such as nfs fail hard. This is because eth0 and eth1 are both receiving the same configuration and are both ifup-ed. Once everything times out and I'm at the console, ifconfig shows that eth0 and eth1 are UP and running with the same IP. Attempting to ping anything in that subnet fails. Restarting the network service fixes the issue. eth0 is the correct NIC that should be configured as primary, per the MAC address. Question: Whats causing eth1 to be brought up with the same config as eth0?? I do not have a config script set up for eth1: banjer@harp:~> ls -la /etc/sysconfig/network/ total 104 drwxr-xr-x 6 root root 4096 Jun 11 12:21 . drwxr-xr-x 6 root root 4096 Apr 10 09:46 .. -rw-r--r-- 1 root root 13916 Apr 10 09:32 config -rw-r--r-- 1 root root 9952 Apr 10 09:36 dhcp -rw------- 1 root root 180 Jun 11 12:21 ifcfg-eth0 -rw------- 1 root root 180 Jun 11 12:21 ifcfg-eth3 -rw------- 1 root root 172 Feb 1 08:32 ifcfg-lo -rw-r--r-- 1 root root 29333 Feb 1 08:32 ifcfg.template drwxr-xr-x 2 root root 4096 Apr 10 09:32 if-down.d -rw-r--r-- 1 root root 239 Feb 1 08:32 ifroute-lo drwxr-xr-x 2 root root 4096 Apr 10 09:33 if-up.d drwx------ 2 root root 4096 May 5 2010 providers -rw-r--r-- 1 root root 25 Nov 16 2010 routes drwxr-xr-x 2 root root 4096 Apr 10 09:36 scripts On a side note, eth3 is also configured with an IP in a different subnet, but this has not posed any problems. FYI the kernel module being used is forcedeth. banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth0 BOOTPROTO='static' BROADCAST='' ETHTOOL_OPTIONS='' IPADDR='172.21.64.25/20' MTU='' NAME='MCP55 Ethernet' NETWORK='' REMOTE_IPADDR='' STARTMODE='auto' USERCONTROL='no' ONBOOT="yes" Here's eth3 in case you need to see it: banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth3 BOOTPROTO='static' BROADCAST='' ETHTOOL_OPTIONS='' IPADDR='172.11.200.4/24' MTU='' NAME='MCP55 Ethernet' NETWORK='' REMOTE_IPADDR='' STARTMODE='auto' USERCONTROL='no' ONBOOT="yes" Perhaps is something related to udev? 70-persistent-net-rules looks OK to me, but I may not understand it completely. banjer@harp:~> cat /etc/udev/rules.d/70-persistent-net.rules # This file was automatically generated by the /lib/udev/write_net_rules # program, run by the persistent-net-generator.rules rules file. # # You can modify it, as long as you keep each rule on a single # line, and change only the value of the NAME= key. 
# PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2" # PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4a", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0" # PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4b", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1" # PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4d", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3" # PCI device 0x1077:0x3032 (qla3xxx) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:c1:dd:0e:34:6c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth4" Any other thoughts on what would cause this?
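    One way to rule udev in or out (a diagnostic sketch only; ethtool -P needs a reasonably recent ethtool, so treat its availability on SLES 11 as an assumption): right after a fresh boot, and before restarting the network service, compare each port's permanent hardware address with the MAC each rule pins:

    for i in eth0 eth1 eth2 eth3; do printf '%s  ' "$i"; ethtool -P "$i"; done    # permanent (burned-in) MAC of each port
    grep -E 'address|NAME' /etc/udev/rules.d/70-persistent-net.rules              # MAC each persistent rule expects

    If eth0 and eth1 come up with the MACs the rules expect, the rename is fine and the duplicate address is being applied on the ifup side (for example a stray config or a hotplug event running the eth0 configuration twice); if the names and MACs are swapped at that point, the race is between udev's rename and the early ifup, and that ordering is where to look.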

    Read the article

  • iptable CLUSTERIP won't work

    - by Rad Akefirad
    We have some requirements which explained here. We tried to satisfy them without any success as described. Here is the brief information: Here are requirements: 1. High Availability 2. Load Balancing Current Configuration: Server #1: one static (real) IP for each 10.17.243.11 Server #2: one static (real) IP for each 10.17.243.12 Cluster (virtual and shared among all servers) IP: 10.17.243.15 I tried to use CLUSTERIP to have the cluster IP by the following: on the server #1 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 1 on the server #2 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 2 When we try to ping 10.17.243.15 there is no reply. And the web service (tomcat on port 8080) is not accessible either. However we managed to get the packets on both servers by using TCPDUMP. Some useful information: iptable roules (iptables -L -n -v): Chain INPUT (policy ACCEPT 21775 packets, 1470K bytes) pkts bytes target prot opt in out source destination 0 0 CLUSTERIP all -- eth0 * 0.0.0.0/0 10.17.243.15 CLUSTERIP hashmode=sourceip clustermac=01:00:5E:00:00:20 total_nodes=2 local_node=1 hash_init=0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 14078 packets, 44M bytes) pkts bytes target prot opt in out source destination Log messages: ... kernel: [ 7.329017] e1000e: eth3 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None ... kernel: [ 7.329133] e1000e 0000:05:00.0: eth3: 10/100 speed: disabling TSO ... kernel: [ 7.329567] ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready ... kernel: [ 71.333285] ip_tables: (C) 2000-2006 Netfilter Core Team ... kernel: [ 71.341804] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) ... kernel: [ 71.343168] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully ... kernel: [ 108.456043] device eth0 entered promiscuous mode ... kernel: [ 112.678859] device eth0 left promiscuous mode ... kernel: [ 117.916050] device eth0 entered promiscuous mode ... kernel: [ 140.168848] device eth0 left promiscuous mode TCPDUMP while pinging: tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 12:11:55.335528 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2390, length 64 12:11:56.335778 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2391, length 64 12:11:57.336010 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2392, length 64 12:11:58.336287 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2393, length 64 And there is no ping reply as I said. Does anyone know which part I missed? Thanks in advance.
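    Two cheap checks, sketched below; both are assumptions about what might be missing rather than a confirmed fix. The CLUSTERIP target handles ARP for the shared cluster MAC and decides which node owns which source hash, but in most write-ups the shared address still has to exist on each node for the stack to generate replies, and the module exposes its view of node ownership under /proc:

    # on each node: make sure the cluster address actually exists locally
    ip addr add 10.17.243.15/32 dev eth0

    # confirm which node number(s) this host currently claims for the cluster IP
    cat /proc/net/ipt_CLUSTERIP/10.17.243.15

    If the address is missing, the node will still receive the frames (which is why tcpdump sees the echo requests) but will never answer for an IP it does not own, which matches the symptom here. If both of these look right, the next suspect is usually the switch, since some switches handle a multicast MAC bound to a unicast IP poorly.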

    Read the article

  • dvd drive I/O error?

    - by bobby
    i get dis error evry time i burn a cd/dvd thru my dvd drive...!! Nero Burning ROM bobby 4C85-200E-4005-0004-0000-7660-0800-35X3-0000-407M-MX37-**** (*) Windows XP 6.1 IA32 WinAspi: - NT-SPTI used Nero Version: 7.11.3. Internal Version: 7, 11, 3, (Nero Express) Recorder: Version: UL01 - HA 1 TA 1 - 7.11.3.0 Adapter driver: HA 1 Drive buffer : 2048kB Bus Type : default CD-ROM: Version: 52PP - HA 1 TA 0 - 7.11.3.0 Adapter driver: HA 1 === Scsi-Device-Map === === CDRom-Device-Map === ATAPI-CD ROM-DRIVE-52MAX F: CdRom0 HL-DT-ST DVDRAM GSA-H12N G: CdRom1 ======================= AutoRun : 1 Excluded drive IDs: WriteBufferSize: 83886080 (0) Byte BUFE : 0 Physical memory : 958MB (981560kB) Free physical memory: 309MB (317024kB) Memory in use : 67 % Uncached PFiles: 0x0 Use Inquiry : 1 Global Bus Type: default (0) Check supported media : Disabled (0) 11.6.2010 CD Image 10:43:02 AM #1 Text 0 File SCSIPTICommands.cpp, Line 450 LockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL 10:43:02 AM #2 Text 0 File Burncd.cpp, Line 3186 HL-DT-ST DVDRAM GSA-H12N Buffer underrun protection activated 10:43:02 AM #3 Text 0 File Burncd.cpp, Line 3500 Turn on Disc-At-Once, using CD-R/RW media 10:43:02 AM #4 Text 0 File DlgWaitCD.cpp, Line 307 Last possible write address on media: 359848 ( 79:59.73) Last address to be written: 318783 ( 70:52.33) 10:43:02 AM #5 Text 0 File DlgWaitCD.cpp, Line 319 Write in overburning mode: NO (enabled: CD) 10:43:02 AM #6 Text 0 File DlgWaitCD.cpp, Line 2988 Recorder: HL-DT-ST DVDRAM G SA-H12N; CDR co de: 00 97 27 18; O SJ entry from: Pla smon Data systems Ltd. ATIP Data: Special Info [hex] 1: D0 00 A0, 2: 61 1B 12 (LI 97:27.18), 3: 4F 3B 4A ( LO 79:59.74) Additional Info [hex] 1: 00 00 00 (invalid), 2: 00 00 00 (invalid), 3: 00 0 0 00 (invalid) 10:43:02 AM #7 Text 0 File DlgWaitCD.cpp, Line 493 Protocol of DlgWaitCD activities: TRM_DATA_MODE1, 2048, config 0, wanted index0 0 blocks, length 318784 blo cks [G: HL-DT-ST DVDRAM GSA-H12N] -------------------------------------------------------------- 10:43:02 AM #9 Text 0 File ThreadedTransferInterface.cpp, Line 986 Prepare [G: HL-DT-ST DVDRAM GSA-H12N] for write in CUE-sheet-DAO DAO infos: ========== MCN: "" TOCType: 0x00; Se ssion Clo sed, disc fixated Tracks 1 to 1: Idx 0 Idx 1 Next T rk 1: TRM_DATA_MODE1, 2048/0x00, FilePos 0 307200 6531768 32, ISRC "" DAO layout: =========== ___Start_|____Track_|_Idx_|_CtrlAdr_|_____Size_|______NWA_|_RecDep__________ -150 | lead-in | 0 | 0x41 | 0 | 0 | 0x00 -150 | 1 | 0 | 0x41 | 0 | 0 | 0x00 0 | 1 | 1 | 0x41 | 318784 | 318784 | 0x00 318784 | lead-out | 1 | 0x41 | 0 | 0 | 0x00 10:43:02 AM #10 Text 0 File SCSIPTICommands.cpp, Line 240 SPTILockVolume - completed successfully for FSCTL_LOCK_VOLUME 10:43:02 AM #11 Text 0 File Burncd.cpp, Line 4286 Caching options: cache CDRom or Network-Yes, small files-Yes ( ON 10:43:03 AM #19 Text 0 File MMC.cpp, Line 18034 CueData, Len=32 41 00 00 14 00 00 00 00 41 01 00 10 00 00 00 00 41 01 01 10 00 00 02 00 41 aa 01 14 00 46 34 22 10:43:03 AM #20 Text 0 File ThreadedTransfer.cpp, Line 268 Pipe memory size 83836800 10:43:16 AM #21 Text 0 File Cdrdrv.cpp, Line 1405 10:43:16.806 - G: HL-DT-ST DVDRAM GSA-H12N : Queue again later 10:43:42 AM #22 SPTI -1502 File SCSIPassThrough.cpp, Line 181 CdRom1: SCSIStatus(x02) WinError(0) NeroError(-1502) Sense Key: 0x04 (KEY_HARDWARE_ERROR) Nero Report 2 Nero Burning ROM Sense Code: 0x08 Sense Qual: 0x03 CDB Data: 0x2A 00 00 00 4D 00 00 00 20 00 00 00 Sense Area: 0x70 00 04 00 00 00 00 10 53 29 A1 80 08 03 Buffer 
x0c7d9a40: Len x10000 0xDC 87 EB 41 6E AC 61 5A 07 B2 DB 78 B5 D4 D9 24 0x8D BC 51 38 46 56 0F EE 16 15 5C 5B E3 B0 10 16 0x14 B1 C3 6E 30 2B C4 78 15 AB D5 92 09 B7 81 23 10:43:42 AM #23 CDR -1502 File Writer.cpp, Line 306 DMA-driver error, CRC error G: HL-DT-ST DVDRAM GSA-H12N 10:43:55 AM #24 Phase 38 File dlgbrnst.cpp, Line 1767 Burn process failed at 48x (7,200 KB/s) 10:43:55 AM #25 Text 0 File SCSIPTICommands.cpp, Line 287 SPTIDismountVolume - completed successfully for FSCTL_DISMOUNT_VOLUME 10:44:01 AM #26 Text 0 File Cdrdrv.cpp, Line 11412 DriveLocker: UnLockVolume completed 10:44:01 AM #27 Text 0 File SCSIPTICommands.cpp, Line 450 UnLockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL Existing drivers: Registry Keys: HKLM\Software\Microsoft\Windows NT\CurrentVersion\WinLogon Nero Report 3

    Read the article

  • Configuring Wireless on Cisco 851W

    - by Aequitarum Custos
    Either a powersurge or something caused our router's configuration to get wiped, and our last backup was before the wireless network was setup. We have not been able to reconfigure the wireless since then, so was curious if anyone here would be able to determine what configuration is needed. We are using a Cisco 851W running 12.4(15)T9 We would like to use WPA encryption, and have it on the same network as the rest of the office network. Config file is below: User Access Verification Building configuration... Current configuration : 3857 bytes ! version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec service password-encryption no service dhcp ! hostname BOB ! boot-start-marker boot-end-marker ! enable secret 5 ********************* ! no aaa new-model ! ! dot11 syslog no ip source-route ! ! ip cef no ip bootp server ip domain name BOB.com ip name-server 61.11.1.1 ip name-server 61.11.1.2 ! ! ! username BOBB privilege 15 password 7 ************************* ! ! archive log config hidekeys ! ! ip tcp synwait-time 10 ! ! ! interface FastEthernet0 no cdp enable ! interface FastEthernet1 no cdp enable ! interface FastEthernet2 no cdp enable ! interface FastEthernet3 no cdp enable ! interface FastEthernet4 description WAN Connection$ETH-WAN$ ip address 61.11.1.14 255.255.254.0 ip nat outside ip virtual-reassembly duplex auto speed auto no cdp enable ! interface Dot11Radio0 no ip address shutdown ! encryption mode ciphers tkip speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 station-role root no cdp enable ! interface Dot11Radio0.1 encapsulation dot1Q 1 native no cdp enable bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 spanning-disabled bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding ! interface Dot11Radio0.20 ip access-group Guest-ACL in no cdp enable ! interface Vlan1 description Internal Network ip address 192.168.2.60 255.255.255.0 ip nat inside ip nat enable ip virtual-reassembly ! ip forward-protocol nd ip route 0.0.0.0 0.0.0.0 61.11.2.14 ! ip http server no ip http secure-server ip nat inside source list 1 interface FastEthernet4 overload ! ip access-list extended Guest-ACL deny ip any 192.0.0.0 0.0.0.255 permit ip any any ! access-list 1 permit 192.0.0.0 0.0.0.255 access-list 100 remark SDM_ACL Category=2 access-list 100 permit ip 192.0.0.0 0.0.0.255 any no cdp run ! control-plane ! !
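    For the WPA piece specifically, a minimal sketch of the radio-side configuration on autonomous IOS 12.4 (the SSID name and passphrase are placeholders, and this is not the whole answer: Dot11Radio0 above is shut down and Vlan1 is not bridged to the radio, so integrated routing and bridging - bridge irb plus a BVI carrying the 192.168.2.x address - would also be needed to land wireless clients on the office subnet):

    dot11 ssid OFFICE-WLAN
       authentication open
       authentication key-management wpa
       wpa-psk ascii 0 ChangeThisPassphrase
       guest-mode
    !
    interface Dot11Radio0
     encryption mode ciphers tkip
     ssid OFFICE-WLAN
     no shutdown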

    Read the article

  • conflict in debian packages

    - by Alaa Alomari
    I have Debian 4 server (i know it is very old) cat /etc/issue Debian GNU/Linux 4.0 \n \l I have the following in /etc/apt/sources.list deb http://debian.uchicago.edu/debian/ stable main deb http://ftp.debian.org/debian/ stable main deb-src http://ftp.debian.org/debian/ stable main deb http://security.debian.org/ stable/updates main apt-get upgrade Reading package lists... Done Building dependency tree... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies. libt1-5: Depends: libc6 (= 2.7) but 2.3.6.ds1-13etch10+b1 is installed locales: Depends: glibc-2.11-1 but it is not installable E: Unmet dependencies. Try using -f. Now it shows that i have Debian 6!! cat /etc/issue Debian GNU/Linux 6.0 \n \l EDIT I have tried apt-get update Get: 1 http://debian.uchicago.edu stable Release.gpg [1672B] Hit http://debian.uchicago.edu stable Release Ign http://debian.uchicago.edu stable/main Packages/DiffIndex Hit http://debian.uchicago.edu stable/main Packages Get: 2 http://security.debian.org stable/updates Release.gpg [836B] Hit http://security.debian.org stable/updates Release Get: 3 http://ftp.debian.org stable Release.gpg [1672B] Ign http://security.debian.org stable/updates/main Packages/DiffIndex Hit http://security.debian.org stable/updates/main Packages Hit http://ftp.debian.org stable Release Ign http://ftp.debian.org stable/main Packages/DiffIndex Ign http://ftp.debian.org stable/main Sources/DiffIndex Hit http://ftp.debian.org stable/main Packages Hit http://ftp.debian.org stable/main Sources Fetched 3B in 0s (3B/s) Reading package lists... Done apt-get dist-upgrade Reading package lists... Done Building dependency tree... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies. libt1-5: Depends: libc6 (= 2.7) but 2.3.6.ds1-13etch10+b1 is installed locales: Depends: glibc-2.11-1 E: Unmet dependencies. Try using -f. apt-get -f install Reading package lists... Done Building dependency tree... Done Correcting dependencies...Done The following extra packages will be installed: gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin libc6 Suggested packages: glibc-doc Recommended packages: libc6-i686 The following packages will be REMOVED libc6-dev libedit-dev libexpat1-dev libgcrypt11-dev libjpeg62-dev libmcal0-dev libmhash-dev libncurses5-dev libpam0g-dev libsablot0-dev libtool libttf-dev The following NEW packages will be installed gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin The following packages will be upgraded: libc6 1 upgraded, 5 newly installed, 12 to remove and 349 not upgraded. 7 not fully installed or removed. Need to get 0B/5050kB of archives. After unpacking 23.1MB disk space will be freed. Do you want to continue [Y/n]? y Preconfiguring packages ... dpkg: regarding .../libc-bin_2.11.3-2_i386.deb containing libc-bin: package uses Breaks; not supported in this dpkg dpkg: error processing /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb (--unpack): unsupported dependency problem - not installing libc-bin Errors were encountered while processing: /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Now: it seems there is a conflict!! how can i fix it? and is it true that the server has became debian 6!!?? Thanks for your help
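    What is actually happening: "stable" in sources.list is not pinned to a release, so a box installed as Debian 4 (etch) is now pulling package lists for whatever is currently stable and apt is trying to put a squeeze libc onto an etch system. The changed /etc/issue most likely just means the newer base-files package slipped in during the partial upgrade, not that the server really became Debian 6. A sketch of a sources.list that points back at the release actually installed (archive.debian.org keeps the retired releases; whether a staged etch -> lenny -> squeeze upgrade is worth attempting on this machine is a separate decision):

    deb http://archive.debian.org/debian/ etch main
    deb-src http://archive.debian.org/debian/ etch main
    deb http://archive.debian.org/debian-security/ etch/updates main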

    Read the article

  • Linux Kernel not passing through multicast UDP packets

    - by buecking
    Recently I've set up a new Ubuntu Server 10.04 and noticed my UDP server is no longer able to see any multicast data sent to the interface, even after joining the multicast group. I've got the exact same set up on two other Ubuntu 8.04.4 LTS machines and there is no problem receiving data after joining the same multicast group. The ethernet card is a Broadcom netXtreme II BCM5709 and the driver used is: b $ ethtool -i eth1 driver: bnx2 version: 2.0.2 firmware-version: 5.0.11 NCSI 2.0.5 bus-info: 0000:01:00.1 I'm using smcroute to manage my multicast registrations. b$ smcroute -d b$ smcroute -j eth1 233.37.54.71 After joining the group ip maddr shows the newly added registration. b$ ip maddr 1: lo inet 224.0.0.1 inet6 ff02::1 2: eth0 link 33:33:ff:40:c6:ad link 01:00:5e:00:00:01 link 33:33:00:00:00:01 inet 224.0.0.1 inet6 ff02::1:ff40:c6ad inet6 ff02::1 3: eth1 link 01:00:5e:25:36:47 link 01:00:5e:25:36:3e link 01:00:5e:25:36:3d link 33:33:ff:40:c6:af link 01:00:5e:00:00:01 link 33:33:00:00:00:01 inet 233.37.54.71 <------- McastGroup. inet 224.0.0.1 inet6 ff02::1:ff40:c6af inet6 ff02::1 So far so good, I can see that I'm receiving data for this multicast group. b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes 09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268 09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 ... I can also confirm that the interface is receiving mcast packets. b $ ethtool -S eth1 | grep mcast_pack rx_mcast_packets: 103998 tx_mcast_packets: 33 Now here's the problem. When I try to capture the traffic using a simple ruby UDP server I receive zero data! Here's a simple server that reads data send on port 15572 and prints the first two characters. This works on the two 8.04.4 Ubuntu Servers, but not the 10.04 server. require 'socket' s = UDPSocket.new s.bind("", 15572) 5.times do text, sender = s.recvfrom(2) puts text end If I send a UDP packet crafted in ruby to localhost, the server receives it and prints out the first two characters. So I know that the server above is working correctly. irb(main):001:0> require 'socket' => true irb(main):002:0> s = UDPSocket.new => #<UDPSocket:0x7f3ccd6615f0> irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572) When I check the protocol statistics I see that InMcastPkts is not increasing. While on the other 8.04 servers, on the same network, received a few thousands packets in 10 seconds. b $ netstat -sgu ; sleep 10 ; netstat -sgu IcmpMsg: InType3: 11 OutType3: 11 Udp: 446 packets received 4 packets to unknown port received. 0 packet receive errors 461 packets sent UdpLite: IpExt: InMcastPkts: 4654 <--------- Same as below OutMcastPkts: 3426 InBcastPkts: 9854 InOctets: -1691733021 OutOctets: 51187936 InMcastOctets: 145207 OutMcastOctets: 109680 InBcastOctets: 1246341 IcmpMsg: InType3: 11 OutType3: 11 Udp: 446 packets received 4 packets to unknown port received. 0 packet receive errors 461 packets sent UdpLite: IpExt: InMcastPkts: 4656 <-------------- Same as above OutMcastPkts: 3427 InBcastPkts: 9854 InOctets: -1690886265 OutOctets: 51188788 InMcastOctets: 145267 OutMcastOctets: 109712 InBcastOctets: 1246341 If I try forcing the interface into promisc mode nothing changes. 
At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking? b $ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server CONFIG_IP_MULTICAST=y Any thoughts on where to go from here?
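    Two inexpensive things to try, offered as guesses rather than known fixes. First, Ubuntu 10.04 enables strict reverse-path filtering by default (check /etc/sysctl.d/10-network-security.conf), and strict rp_filter can silently discard multicast whose source address is not reachable back through the receiving interface - worth checking with sysctl net.ipv4.conf.all.rp_filter and net.ipv4.conf.eth1.rp_filter (0 or 2 being the permissive settings). Second, have the test program join the group itself instead of relying on smcroute, to rule out a membership/interface mismatch; a sketch of the same small Ruby server doing its own join:

    require 'socket'
    require 'ipaddr'

    # 233.37.54.71 is the group from the question; 0.0.0.0 lets the kernel pick the
    # interface - on a multi-homed box, substituting eth1's own address is safer.
    membership = IPAddr.new('233.37.54.71').hton + IPAddr.new('0.0.0.0').hton

    s = UDPSocket.new
    s.setsockopt(Socket::IPPROTO_IP, Socket::IP_ADD_MEMBERSHIP, membership)
    s.bind('', 15572)
    5.times do
      text, _sender = s.recvfrom(2)
      puts text
    end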

    Read the article

  • Protecting Apache with Fail2Ban

    - by NetStudent
    Having checked my Apache logs for the last two days I have noticed several attempts to access URLs such as /phpmyadmin, /phpldapadmin: 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 415 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 405 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 404 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /pma/scripts/setup.php HTTP/1.1" 404 399 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:37 +0100] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu" 66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 188.132.178.34 - - [09/Jun/2012:08:39:05 +0100] "HEAD /manager/html HTTP/1.0" 404 166 "-" "-" 95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 194.128.132.2 - - [09/Jun/2012:16:04:41 +0100] "HEAD / HTTP/1.0" 200 260 "-" "-" 66.249.68.176 - - [09/Jun/2012:18:08:12 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.68.176 - - [09/Jun/2012:18:08:13 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 212.3.106.249 - - [09/Jun/2012:18:12:33 +0100] "GET / HTTP/1.1" 200 388 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/ HTTP/1.1" 404 379 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/htdocs/ HTTP/1.1" 404 386 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:35 +0100] "GET /phpldap/ HTTP/1.1" 404 374 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /phpldap/htdocs/ HTTP/1.1" 404 381 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /admin/ HTTP/1.1" 404 372 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/ HTTP/1.1" 404 377 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/htdocs/ HTTP/1.1" 404 384 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/phpldap/ HTTP/1.1" 404 380 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldap/htdocs/ HTTP/1.1" 404 387 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldapadmin/htdocs/ HTTP/1.1" 404 392 "-" "-" 212.3.106.249 - - 
[09/Jun/2012:18:12:40 +0100] "GET /admin/phpldapadmin/ HTTP/1.1" 404 385 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:40 +0100] "GET /openldap HTTP/1.1" 404 374 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:41 +0100] "GET /openldap/htdocs HTTP/1.1" 404 381 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:42 +0100] "GET /openldap/htdocs/ HTTP/1.1" 404 382 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/ HTTP/1.1" 404 371 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/htdocs/ HTTP/1.1" 404 378 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:45 +0100] "GET /ldap/phpldapadmin/ HTTP/1.1" 404 384 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:46 +0100] "GET /ldap/phpldapadmin/htdocs/ HTTP/1.1" 404 391 "-" "-" Is there any way I can use Fail2Ban or any other similar software to ban these IPs in situations when my server is being abused this way (by trying several "common" URLs)?
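    Yes - fail2ban can watch the Apache access log with a custom filter and jail. A rough sketch (the filter name, log path, and thresholds below are illustrative and will need tuning; anchoring on the 404 status keeps ordinary visitors who mistype a URL from being banned too quickly, and recent fail2ban releases also ship stock apache-noscript / apache-overflows filters worth looking at first):

    # /etc/fail2ban/filter.d/apache-probe.conf
    [Definition]
    failregex = ^<HOST> .*"(GET|POST|HEAD) .*(w00tw00t|phpMyAdmin|phpmyadmin|myadmin|pma/|phpldapadmin|/manager/html).* HTTP.*" 404
    ignoreregex =

    # added to /etc/fail2ban/jail.local
    [apache-probe]
    enabled  = true
    filter   = apache-probe
    port     = http,https
    logpath  = /var/log/apache2/access.log
    maxretry = 2
    bantime  = 86400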

    Read the article

  • Microsoft ReportViewer SetParameters continuous refresh issue

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2013/10/16/microsoft-reportviewer-setparameters-continuous-refresh-issue.aspxI am a big fun of using ASP.NET MVC for building web-applications. It allows us to create simple, robust and testable solutions. However, .NET world is not perfect. There is tons of code written in ASP.NET web-forms. You cannot simply ignore it, even if you want to. Sometimes ASP.NET web-forms controls bring us non-obvious issues. The good example is Microsoft ReportViewer control. I have an example for you. 1: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %> 2: <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %> 3:   4: <!DOCTYPE html> 5:   6: <html xmlns="http://www.w3.org/1999/xhtml"> 7: <head runat="server"> 8: <title>Report Viewer Continiuse Resfresh Issue Example</title> 9: </head> 10: <body> 11: <form id="form1" runat="server"> 12: <div> 13: <asp:ScriptManager runat="server"></asp:ScriptManager> 14: <rsweb:ReportViewer ID="_reportViewer" runat="server" Width="100%" Height="100%"></rsweb:ReportViewer> 15: </div> 16: </form> 17: </body> 18: </html>   The back-end code is simple as well. I want to show a report with some parameters to a user. 1: protected void Page_Load(object sender, EventArgs e) 2: { 3: _reportViewer.ProcessingMode = ProcessingMode.Remote; 4: _reportViewer.ShowParameterPrompts = false; 5:   6: var serverReport = _reportViewer.ServerReport; 7: serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS"); 8: serverReport.ReportPath = "/Reports/TestReport"; 9:   10: var reportParameter1 = new ReportParameter("Parameter1"); 11: reportParameter1.Values.Add("Hello World!"); 12:   13: var reportParameter2 = new ReportParameter("Parameter2"); 14: reportParameter2.Values.Add("10/16/2013"); 15:   16: var reportParameter3 = new ReportParameter("Parameter3"); 17: reportParameter3.Values.Add("10"); 18:   19: serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 }); 20: }   I set ShowParametersPrompts to false because I do not want user to refine the search. It looks good until you run the report. The report will refresh itself all the time. The problem caused by ServerReport.SetParameters method in Page_Load. The method cause ReportViewer control to execute the report on the NEXT post back. That is why the page has continuous post-backs. The fix is very simple: do nothing if Page_Load method executed during post-back. 
1: protected void Page_Load(object sender, EventArgs e) 2: { 3: if (IsPostBack) 4: { 5: return; 6: } 7:   8: _reportViewer.ProcessingMode = ProcessingMode.Remote; 9: _reportViewer.ShowParameterPrompts = false; 10:   11: var serverReport = _reportViewer.ServerReport; 12: serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS"); 13: serverReport.ReportPath = "/Reports/TestReport"; 14:   15: var reportParameter1 = new ReportParameter("Parameter1"); 16: reportParameter1.Values.Add("Hello World!"); 17:   18: var reportParameter2 = new ReportParameter("Parameter2"); 19: reportParameter2.Values.Add("10/16/2013"); 20:   21: var reportParameter3 = new ReportParameter("Parameter3"); 22: reportParameter3.Values.Add("10"); 23:   24: serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 }); 25: } You can download sample code from GitHub - https://github.com/ilich/Examples/tree/master/ReportViewerContinuousRefresh

    Read the article

  • Evaluating Oracle Data Mining Has Never Been Easier - Evaluation "Kit" Available

    - by chberger
    Now you can quickly and easily get set up to start using Oracle Data Mining for evaluation purposes. Just go to the Oracle Technology Network (OTN) and follow these simple steps. Oracle Data Mining Evaluation "Kit" Instructions
Step 1: Download and Install the Oracle Database 11g Release 2 Anyone can download and install the Oracle Database for free for evaluation purposes. Read the OTN web site for details. 11.2.0.1.0 DB is the minimum, 11.2.0.2 is better and naturally 11.2.0.3 is best if you are a current customer and on active support. Either 32-bit or 64-bit is fine. 4GB of RAM or more works fine for SQL Developer and the Oracle Data Miner GUI extension. Downloading the database and installing it should take just about an hour or so, depending on your network and computer. For more instructions on setting up Oracle Data Mining see: http://www.oracle.com/technetwork/database/options/odm/dataminerworkflow-168677.html When you install the Oracle Database, the Sample Examples data should also be installed, e.g.: Release 2 Examples win32_11gR2_examples.zip (565,154,740 bytes). Contains examples of how to use the Oracle Database. Download if you are new to Oracle and want to try some of the examples presented in the Documentation
Step 2: Install SQL Developer 3.1 (the Oracle Data Mining Extension installs automatically)
Step 3: Follow the four free step-by-step Oracle-by-Examples e-training lessons: Setting Up Oracle Data Miner 11g Release 2 This tutorial covers the process of setting up Oracle Data Miner 11g Release 2 for use within Oracle SQL Developer 3.0. Using Oracle Data Miner 11g Release 2 This tutorial covers the use of Oracle Data Miner to perform data mining against Oracle Database 11g Release 2. In this lesson, you examine and solve a data mining business problem by using the Oracle Data Miner graphical user interface (GUI). Star Schema Mining Using Oracle Data Miner This tutorial covers the use of Oracle Data Miner to perform star schema mining against Oracle Database 11g Release 2. Text Mining Using Oracle Data Miner This tutorial covers the use of Oracle Data Miner to perform text mining against Oracle Database 11g Release 2. That’s it! Easy, fun and the fastest way to get started evaluating Oracle Data Mining. Enjoy! Charlie

    Read the article

  • BI&EPM in Focus June 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News Thomas Kurian Discusses Oracle Exalytics, SAP HANA (replay | preso | press)  Accenture & Oracle Study: The Challenges of Corporate Financial Reporting  (link) Flash Demo: Oracle Hyperion Planning on Exalytics in the Public Sector (link) Flash Demo: OBIEE & Exalytics in Retail (link) Customers Italian Partner Alfa Sistemi implemented at Autovie Venete S.p.A. Integrates Business Intelligence and Performance Management to Improve Efficiency and Speed for Managing Public Works Projects (English version)  / Autovie Venete implementa un sistema integrato di Business Intelligence e Performance Management per migliorare l’efficienza e la tempestività dell’attività di Controlling di Commessa (Italian version). FANCL Gains 360-Degree View of Customers across Multiple Sales Channels, Reduces Reports by 75% Korea Yakult Improves Profit & Loss Analysis with Oracle Hyperion Planning and OBIEE Hill International Streamlines Forecasting, Improves Visibility into Project Productivity and Profitability Children’s Rights in Society Better Supports Organizational Mission with Advanced, Integrated, and Streamlined Business Intelligence Tools Profit: International utility Enel monitors the performance of global subsidiaries with Oracle Hyperion Applications (link) Profit: Charting a New Course: Korean Air gains altitude by leveraging its greatest asset: information (link)   Events June 12: Breaking Away from the Excel Add-In: Welcome to Hyperion Smart View 11.1.2.2 (link) June 13: Upgrading OBIEE 10g to 11g: Best Practices and Lessons Learned (performance architects) (link) June 14, The Netherlands: Strategies for Business Excellence, New Release of Oracle Hyperion EPM Suite (link) June 21: Comprehensive and Accurate Forecasting for Healthcare (link) June 26: What Exactly is Exalytics? (KPI Partners) (link) Webcast Replay: Is Your Company Able to Navigate Through Market Volatility? (link)  Webcast Replay: Is Hope and Email The Core of Your Reconciliation Process? (link) Webcast Replay: Troubleshooting EPM Reporting & Analysis 11.1.2.x  (link) Webcast Replay: Is your Organization Flying Blind when it comes to Understanding Profitability?  (link) Enterprise Performance Management Final Oracle EPM  Information Panel (CIP) survey on cost, profitability and performance reporting/scorecards is now OPEN (link) New on EPM Blog: What's Going on With IFRS? (link) How does Crystal Ball integrate with EPM Solutions? New collateral and demos on Crystal Ball Solution Factory!  (link) New Youtube Video: Business Case Analysis with Oracle Crystal Ball (link) Crystal Ball 11.1.2.2 is released! Grouped Assumptions in Sensitivity Charts, Data Filtering When Fitting Distributions and Parameter Edits When Fitting Distributions to name a few. Get full details from the online New Features Guide (link) New DRM Oracle-by-Examples now available (link) Support Blog: Hyperion Ledgerlink Sample Record and Windows 7: Now you see it, now you don’t  (link) Use Enterprise Manager FMW Control to Troubleshoot Oracle EPM 11.1.2 Family of Products (link) Business  Intelligence Whitepaper: Real-Time Operational Reporting for E-Business Suite via GoldenGate Replication to an Operational Data Store.  
How Oracle enabled real-time operational reporting for its $20B services contract business with Golden Gate & OBIEE (link) KPI Partners ebook: Understanding Oracle BI Components and Repository Modeling Basics (link) “Getting Started with Oracle Endeca Information Discovery” video tutorials now available (link) Oracle BI Publisher Conversion Center: Convert from Crystal, Actuate, or Oracle Reports to Oracle BI Publisher (link) Oracle Fusion Applications: Monthly Partner Updates Webcast Replays to help BI partners understand how OBI, Essbase, BI-Apps and Fusion work together: More on Fusion CRM: Fusion Marketing More on Fusion CRM: Fusion CRM Sales Start-Up Packs and Expert Services for Implementation Partners Introducing the Oracle Fusion Accounting Hub Implementing Fusion Applications using Oracle's Composers Oracle Fusion Applications Co-Existence

    Read the article

  • Eclipse Indigo very slow on Kubuntu 12.04

    - by herom
    hello fellow ubuntu users! I have a really big problem with my Eclipse Indigo running on Kubuntu 12.04 32bit, Dell Vostro 3500, Intel(R) Core(TM) i5 CPU M480 @ 2.67 (as cat /proc/cpuinfo said). It has 4GB RAM. cat /proc/cpuinfo brings up the following: processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz stepping : 5 microcode : 0x2 cpu MHz : 1197.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dts tpr_shadow vnmi flexpriority ept vpid bogomips : 5319.85 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz stepping : 5 microcode : 0x2 cpu MHz : 1197.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 2 cpu cores : 2 apicid : 4 initial apicid : 4 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dts tpr_shadow vnmi flexpriority ept vpid bogomips : 5319.88 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 2 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz stepping : 5 microcode : 0x2 cpu MHz : 1197.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 1 initial apicid : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dts tpr_shadow vnmi flexpriority ept vpid bogomips : 5319.88 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 3 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz stepping : 5 microcode : 0x2 cpu MHz : 1197.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 2 cpu cores : 2 apicid : 5 initial apicid : 5 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dts 
tpr_shadow vnmi flexpriority ept vpid bogomips : 5319.88 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: java -version brings the following: java version "1.7.0_04" Java(TM) SE Runtime Environment (build 1.7.0_04-b20) Java HotSpot(TM) Server VM (build 23.0-b21, mixed mode) it's the Oracle Java, not OpenJDK. I try to develop an Android application for GoogleTV and Eclipse is this slow, that it can't follow my typing (extreme lagging!!), but this issue makes it almost impossible! here is my eclipse.ini file: -startup plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar --launcher.library plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.100.v20110505 -product org.eclipse.epp.package.java.product --launcher.defaultAction openFile -showsplash org.eclipse.platform --launcher.XXMaxPermSize 512m --launcher.defaultAction openFile -vmargs -Dosgi.requiredJavaVersion=1.5 -Declipse.p2.unsignedPolicy=allow -Xms256m -Xmx512m -Xss4m -XX:PermSize=128m -XX:MaxPermSize=384m -XX:CompileThreshold=5 -XX:MaxGCPauseMillis=10 -XX:MaxHeapFreeRatio=70 -XX:+CMSIncrementalPacing -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:+UseFastAccessorMethods -XX:ReservedCodeCacheSize=64m -Dcom.sun.management.jmxremote has anybody faced the same problems? can anybody help me on this problem? it's really urgent as I'm sitting here at my company and am not able to do anything productive...
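    One cheap experiment (the heap numbers below are guesses for a 4 GB machine, not tuned values): the eclipse.ini above combines G1 - still fairly new on early JDK 7 builds - with very aggressive flags such as -XX:CompileThreshold=5 and -XX:MaxGCPauseMillis=10, and stripping the vmargs back to plain heap sizing is a common first step when Indigo cannot keep up with typing:

    -vmargs
    -Dosgi.requiredJavaVersion=1.5
    -Declipse.p2.unsignedPolicy=allow
    -Xms512m
    -Xmx1024m
    -XX:PermSize=128m
    -XX:MaxPermSize=384m

    If the lag disappears, the removed options can be reintroduced one at a time to find the culprit; if it stays, the next suspects are workspace builders and validators (for example the Android ADT builders) rather than the JVM flags.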

    Read the article

  • BI&EPM in Focus Sep 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers ·       Iluka Resources Improves Business Insight into Mining Operations Through Significantly Faster, Customized Analyses ·       Waikato Regional Council Consolidates Financial Reports up to 10 Times Faster  ·       Lojas Renner Shortens Budget Consolidation from Three Days to 15 Minutes; Improves Data Quality, and Supports Aggressive Expansion Plans  ·       Link to Complete Archive ·       Profit Magazine article featuring General Dynamics: RECONnomics: Integrate. Innovate. Grow (link) ·       Video: Goodhope Asia Unifies Financial Data with Oracle Hyperion (link)   Enterprise Performance Management ·       Oracle University Training on Demand: 1.     Oracle Hyperion Financial Reporting 11.1.2 for Financial Management (link) 2.     Oracle Essbase Bootcamp: On Demand (link) 3.     Oracle Hyperion Planning 11.1.2: Create & Manage Applications On Demand (link)   Business Intelligence ·       Oracle University Training on Demand: 1.     Learn How to Create Analyses, Dashboards with OBI 11g (link) 2.     Build Repositories with Oracle Business Intelligence 11g (link) 3.     Oracle BI Publisher 11g Fundamentals (link) 4.     Oracle BI Applications Courses now available for 7.9.6  (link) 5.     Oracle BI 11g: New Features and Exalytics ·       Oracle Business Intelligence Release 11.1.1.6.2BP, updated information: 1.     Oracle BI Mobile at the Speed of Thought 2.     What's New in Oracle Business Intelligence Mobile 3.     What's New in Oracle Business Intelligence Visualizations 4.     Oracle Business Intelligence on Oracle.com 5.     Download the New Release ·       Discover How to Turn Data into Insight: Big Data Guide : Whitepaper and set of short Videos. ·       New OPN Specialisation Exam for OBI 11g Certification . ·       Lastest BIC2g and Exalytics Demonstration VMs for Partners . ·       New Version 2.3 Oracle Endeca Information Discovery and Server now available . ·       New Oracle BI Publisher 11.1.1.6 Trial Edition Now Available .

    Read the article
