Search Results

Search found 15698 results on 628 pages for 'keep alive'.


  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    Hi mates, we have 2 HTTP load balancers with HAProxy and heartbeat, and 4 Apache nodes in this cluster doing round-robin load balancing. The HTTP cluster is working fine. We are having a problem with our portal because it uses SSO: we need sticky-connection support in our HAProxy, and we also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.* /var/log/haproxy.log
            #
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            chroot /var/lib/haproxy
            pidfile /var/run/haproxy.pid
            maxconn 4000
            user haproxy
            group haproxy
            daemon
            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode http
            log global
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor except 127.0.0.0/8
            option redispatch
            retries 3
            timeout http-request 10s
            timeout queue 1m
            timeout connect 10s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
            maxconn 3000

        #---------------------------------------------------------------------
        # main frontend which proxies to the backends
        #---------------------------------------------------------------------
        frontend main *:5000
            acl url_static path_beg -i /static /images /javascript /stylesheets
            acl url_static path_end -i .jpg .gif .png .css .js
            use_backend static if url_static
            default_backend app

        #---------------------------------------------------------------------
        # static backend for serving up images, stylesheets and such
        #---------------------------------------------------------------------
        backend static
            balance roundrobin
            server static 127.0.0.1:4331 check

        #---------------------------------------------------------------------
        # round robin balancing between the various backends
        #---------------------------------------------------------------------
        backend app

        listen ha-http 10.190.1.28:80
            mode http
            stats enable
            stats auth admin:xxxxxx
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie B check
            server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
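
    One thing worth noting in the config above: apache2, apache3 and apache4 all share the cookie value B, so they are indistinguishable to the stickiness logic; each server needs a unique cookie value. For the two features being asked about, here is a hedged sketch using HAProxy 1.4-era directives (names and addresses follow the config above; HAProxy of this vintage cannot terminate SSL itself, so HTTPS is balanced in TCP mode with source-IP affinity instead of cookies):

        # Sketch only: cookie-based stickiness so SSO sessions stay on one node.
        listen ha-http 10.190.1.28:80
            mode http
            balance roundrobin
            cookie SERVERID insert indirect nocache
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie a1 check
            server apache2 im-01:80 cookie a2 check
            server apache3 im-02:80 cookie a3 check
            server apache4 im-03:80 cookie a4 check

        # HTTPS: HAProxy 1.4 cannot decrypt TLS, so cookies are invisible here;
        # balance the raw TCP stream and pin clients by source IP instead.
        listen ha-https 10.190.1.28:443
            mode tcp
            balance source
            server apache1 portal-04:443 check
            server apache2 im-01:443 check
            server apache3 im-02:443 check
            server apache4 im-03:443 check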

    Read the article

  • Accidentally deleted symlink libc.so.6 in CentOS 6.4. How to get sudo privilege to re-create it?

    - by Eric
    I accidentally deleted the symlink /lib64/libc.so.6 -> /lib64/libc-2.12.so with $ sudo rm libc.so.6. After that I could not use anything, including the ls command; every command I type produces: ls: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory. I've tried $ export LD_PRELOAD=/lib64/libc-2.12.so, and after this I can use ls and ln, but I still cannot use sudo ln, sudo -E ln, sudo su, or even su. I always get: sudo: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory (or the same message from su). It seems LD_PRELOAD works only for the current shell session of my account, but not for a new account like root or a new session. It's a remote server, so I cannot use a live CD. I currently have one ssh bash session alive but cannot establish new ones. I have sudo privilege but don't have the root password. So my problem is: I need to run sudo sln -s libc-2.12.so libc.so.6 to re-create the symlink libc.so.6, but I cannot run sudo without libc.so.6. How can I fix it? Thanks!
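
    The sudo dead end is structural: the dynamic loader runs set-uid programs like sudo and su in secure-execution mode and ignores LD_PRELOAD, so no sudo variant can work once libc.so.6 is gone. Recovery needs some root context (a console login, a rescue boot, or a still-running root process). Once one exists, a hedged sketch of the usual steps, assuming a CentOS 6 x86-64 layout:

        # In the surviving session (LD_PRELOAD already exported), stage a
        # loader-visible copy of the symlink somewhere writable:
        ln -s /lib64/libc-2.12.so /tmp/libc.so.6

        # Any NON-setuid binary can then run through the loader directly:
        /lib64/ld-linux-x86-64.so.2 --library-path /tmp /bin/ls /lib64

        # From a root shell (console or rescue), either recreate the link the
        # same way, or use sln, the statically linked ln that glibc ships for
        # exactly this accident (it takes no -s flag; sln always makes symlinks):
        /sbin/sln /lib64/libc-2.12.so /lib64/libc.so.6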

    Read the article

  • How to get the row count for a jQuery grid

    - by kumar
    I used this code to get the count of records in the jQuery grid: var numRows = jQuery("#mygrid").jqGrid('getGridParam', 'records'); But wherever I place it in my view, before or after the grid, I always get 0, because it always runs before the grid has loaded. I need to place this code so that it runs after the grid loads. If I put something like this: alert("hello"); var numRows = jQuery("#mygrid").jqGrid('getGridParam', 'records'); alert(numRows); that is, an alert message first and then the count, I get the number of records. But if I run the count directly: var numRows = jQuery("#mygrid").jqGrid('getGridParam', 'records'); alert(numRows); I get 0. I don't know why it behaves like this; if I put the first alert box anywhere, the second alert box shows the row count. Can anybody help me out? Thanks
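
    The alert only "fixes" it because it delays execution until the grid's asynchronous load has finished. A sketch of the standard fix, reading the count inside jqGrid's loadComplete callback:

        // Sketch: loadComplete fires after the grid's data load finishes,
        // so 'records' is populated by the time this runs.
        jQuery("#mygrid").jqGrid({
            // ... existing grid options ...
            loadComplete: function () {
                var numRows = jQuery("#mygrid").jqGrid('getGridParam', 'records');
                alert(numRows); // now reflects the loaded data
            }
        });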

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):

    - All NFS-based VMs remain up and working perfectly through the failover and after.
    - All VMs running on iSCSI eventually report the following: an error about not being able to write to a particular block, an error about journaling not working, and then the file system goes read-only.

    To get the VMs working again I have to do the following:

    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool).

    If I don't boot on a different server I get this error: Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!"). The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (it can be, but the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
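
    For reference, a hedged sketch of the open-iscsi timeouts that usually decide whether the initiator rides out a failover or hands I/O errors up to the block layer (the values are illustrative; they need to exceed how long your failover actually takes):

        # Sketch: /etc/iscsi/iscsid.conf settings commonly raised for failover.
        node.session.timeo.replacement_timeout = 120   # seconds before I/O errors surface
        node.conn[0].timeo.noop_out_interval = 10      # NOP-Out ping interval
        node.conn[0].timeo.noop_out_timeout = 30       # wait for the ping reply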

    Read the article

  • feedparser - various errors

    - by Eiriks
    I need feedparser (see http://www.feedparser.org) for a project, and want to keep third-party modules in a separate folder. I did this by adding a folder to my Python path and putting relevant modules there, among them feedparser. The first attempt to import feedparser resulted in:

        >>> import feedparser
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/home/users/me/modules/feedparser.py", line 1
            ed socket timeout; added support for chardet library
            ^
        SyntaxError: invalid syntax

    I found the text "socket timeout; added..." in the comments at the bottom of the file, removed those comments, and tried again:

        >>> import feedparser
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/home/users/me/modules/feedparser.py", line 1
            = [(key, value) for key, value in attrs if key in self.acceptable_attributes]
            ^
        IndentationError: unexpected indent

    OK, so some indentation error. I made sure the indentation in the function in question was fine (moved some line breaks down to no indent) and tried again:

        >>> import feedparser
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/home/users/me/modules/feedparser.py", line 1
            , value) for key, value in attrs if key in self.acceptable_attributes]
            ^
        SyntaxError: invalid syntax

    As much as I google, I cannot find anything wrong with the syntax:

        def unknown_starttag(self, tag, attrs):
            if not tag in self.acceptable_elements:
                if tag in self.unacceptable_elements_with_end_tag:
                    self.unacceptablestack += 1
                return
            attrs = self.normalize_attrs(attrs)
            attrs = [(key, value) for key, value in attrs if key in self.acceptable_attributes]
            _BaseHTMLProcessor.unknown_starttag(self, tag, attrs)

    Now what? Is my approach all wrong? Why do I keep producing these errors in a module that seems so well tested and trusted?
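
    Given that all three errors point at line 1, the copy of feedparser.py itself looks corrupted (for example a partial or HTML-mangled download) rather than buggy; a quick sanity check before editing any further:

        # A healthy feedparser.py starts with a shebang/docstring, not
        # changelog prose:
        head -3 /home/users/me/modules/feedparser.py

        # Compile the file standalone to see the first real syntax problem:
        python -c "import py_compile; py_compile.compile('/home/users/me/modules/feedparser.py')"

        # If it is mangled, replace it with a freshly downloaded release
        # instead of editing around the damage.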

    Read the article

  • How to handle payment types with varying properties in the most elegant way.

    - by Byron Sommardahl
    I'm using ASP.NET MVC 2. Keeping it simple, I have three payment types: credit card, e-check, or "bill me later". I want to:

    1. choose one payment type
    2. display some fields for that payment type in my view
    3. run some logic using those fields (specific to the type)
    4. display a confirmation view
    5. run some more logic using those fields (specific to the type)
    6. display a receipt view

    Each payment type has fields specific to the type... maybe 2 fields, maybe more. For now I know how many and what fields, but more could be added. I believe the best thing for my views is to have a partial view per payment type to handle the different fields, and to let the controller decide which partial to render (if you have a better option, I'm open). My real problem comes from the logic that happens in the controller between views. Each payment type has a variable number of fields. I'd like to keep everything strongly typed, but it feels like some sort of dictionary is the only option. Add to that the specific logic that runs depending on the payment type. In an effort to keep things strongly typed, I've created a class for each payment type, with no interface or inherited type since the fields differ per payment type. Then I've got a Submit() method for each payment type. Then, while the controller is deciding which partial view to display, it also assigns the target of the submit action. This is not an elegant solution and feels very wrong. I'm reaching out for a hand. How would you do this?
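
    One hedged sketch of the usual shape (all names illustrative, not from the original post): put the shared *behaviour* behind an interface while the type-specific *fields* stay on each concrete class, so each POST action stays strongly typed against one concrete type and the between-views pipeline is polymorphic:

        // Sketch only: shared behaviour on the interface, fields per type.
        public interface IPaymentMethod
        {
            string PartialViewName { get; }   // which editor partial to render
            void Submit();                    // type-specific processing
            void Confirm();                   // type-specific confirmation logic
        }

        public class CreditCardPayment : IPaymentMethod
        {
            public string CardNumber { get; set; }
            public string Expiry { get; set; }
            public string PartialViewName { get { return "CreditCard"; } }
            public void Submit() { /* charge the card */ }
            public void Confirm() { /* capture and log */ }
        }

        public class BillMeLaterPayment : IPaymentMethod
        {
            public string InvoiceEmail { get; set; }
            public string PartialViewName { get { return "BillMeLater"; } }
            public void Submit() { /* create the invoice */ }
            public void Confirm() { }
        }

    Each partial then posts to its own strongly typed action (e.g. SubmitCreditCard(CreditCardPayment model)), and everything after model binding works against IPaymentMethod.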

    Read the article

  • Suggestions for doing async I/O with Task Parallel Library

    - by anelson
    I have some high-performance file transfer code which I wrote in C# using the Async Programming Model (APM) idiom (e.g., BeginRead/EndRead). This code reads a file from a local disk and writes it to a socket. For best performance on modern hardware, it's important to keep more than one outstanding I/O operation in flight whenever possible. Thus, I post several BeginRead operations on the file; then, when one completes, I call BeginSend on the socket, and when that completes I do another BeginRead on the file. The details are a bit more complicated than that, but at a high level that's the idea. I've got the APM-based code working, but it's very hard to follow and probably has subtle concurrency bugs. I'd love to use TPL for this instead. I figured Task.Factory.FromAsync would just about do it, but there's a catch. All of the I/O samples I've seen (most particularly the StreamExtensions class in the Parallel Extensions Extras) assume one read followed by one write. This won't perform the way I need. I can't use something simple like Parallel.ForEach or the Extras extension Task.Factory.Iterate, because the async I/O tasks don't spend much time on a worker thread, so Parallel just starts up another task, resulting in potentially dozens or hundreds of pending I/O operations; way too many! You can work around that by waiting on your tasks, but that causes creation of an event handle (a kernel object) and a blocking wait on a task wait handle, which ties up a worker thread. My APM-based implementation avoids both of those things. I've been playing around with different ways to keep multiple read/write operations in flight, and I've managed to do so using continuations that call a method that creates another task, but it feels awkward and definitely doesn't feel like idiomatic TPL. Has anyone else grappled with an issue like this with the TPL? Any suggestions?
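
    For reference, a minimal sketch of one read-then-send link built from the APM pairs with Task.Factory.FromAsync, assuming .NET 4-era TPL; a real pipeline would keep several such chains in flight over different file regions, and buffering, the EOF loop, and error handling are omitted:

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        static class Pipeline
        {
            // Sketch: wrap BeginRead/EndRead, then chain BeginSend/EndSend
            // via a continuation; Unwrap flattens the Task<Task> result.
            public static Task ReadThenSend(FileStream file, Socket socket, byte[] buffer)
            {
                return Task<int>.Factory
                    .FromAsync(file.BeginRead, file.EndRead, buffer, 0, buffer.Length, null)
                    .ContinueWith(read =>
                    {
                        int n = read.Result;
                        if (n == 0) return (Task)read;   // EOF: nothing to send
                        // BeginSend has too many args for FromAsync's overloads,
                        // so adapt it with a lambda:
                        return Task<int>.Factory.FromAsync(
                            (cb, state) => socket.BeginSend(buffer, 0, n, SocketFlags.None, cb, state),
                            socket.EndSend, null);
                    })
                    .Unwrap();
            }
        }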

    Read the article

  • R ggplot2: Arrange facet_grid by non-facet column (and labels using non-facet column)

    - by tommy-o-dell
    I have a couple of questions regarding facetting in ggplot2. Let's say I have a query that returns data that looks like this (note that it's ordered by Rank asc, Alarm asc; two alarms have a Rank of 3 because their totals = 1798 for week 4, and Rank is set according to Total for week 4):

        Rank Week Alarm                       Total
        1    1    BELTWEIGHER HIGH HIGH       1000
        1    2    BELTWEIGHER HIGH HIGH       1050
        1    3    BELTWEIGHER HIGH HIGH        900
        1    4    BELTWEIGHER HIGH HIGH       1800
        2    1    MICROWAVE LHS                200
        2    2    MICROWAVE LHS               1200
        2    3    MICROWAVE LHS                400
        2    4    MICROWAVE LHS               1799
        3    1    HI PRESS FILTER 2 CLOG SW   1250
        3    2    HI PRESS FILTER 2 CLOG SW   1640
        3    3    HI PRESS FILTER 2 CLOG SW   1000
        3    4    HI PRESS FILTER 2 CLOG SW   1798
        3    1    LOW PRESS FILTER 2 CLOG SW   800
        3    2    LOW PRESS FILTER 2 CLOG SW  1200
        3    3    LOW PRESS FILTER 2 CLOG SW   800
        3    4    LOW PRESS FILTER 2 CLOG SW  1798

    (duplication code below)

        Rank = c(rep(1,4), rep(2,4), rep(3,8))
        Week = c(rep(1:4,4))
        Total = c(1000,1050,900,1800,
                  200,1200,400,1799,
                  1250,1640,1000,1798,
                  800,1200,800,1798)
        Alarm = c(rep("BELTWEIGHER HIGH HIGH",4),
                  rep("MICROWAVE LHS",4),
                  rep("HI PRESS FILTER 2 CLOG SW",4),
                  rep("LOW PRESS FILTER 2 CLOG SW",4))
        spark <- data.frame(Rank, Week, Alarm, Total)

    Now when I do this...

        s <- ggplot(spark, aes(Week, Total)) +
          opts(
            panel.background = theme_rect(size = 1, colour = "lightgray"),
            panel.grid.major = theme_blank(),
            panel.grid.minor = theme_blank(),
            axis.line = theme_blank(),
            axis.text.x = theme_blank(),
            axis.text.y = theme_blank(),
            axis.title.x = theme_blank(),
            axis.title.y = theme_blank(),
            axis.ticks = theme_blank(),
            strip.background = theme_blank(),
            strip.text.y = theme_text(size = 7, colour = "red", angle = 0)
          )
        s + facet_grid(Alarm ~ .) + geom_line()

    ...I get a plot facetted according to Alarm, with the facets arranged alphabetically. Two questions:

    1. How can I keep it facetted by alarm but displayed in the correct order (Rank asc, Alarm asc)?
    2. How can I keep it facetted by alarm but show labels from Rank instead of Alarm? Note that I can't just facet on Rank, because ggplot2 would see only 3 facets to plot where there are really 4 different alarms.

    Thanks kindly for the help! Tommy
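
    A hedged sketch of the usual answer: facet order follows factor-level order, so setting the levels explicitly handles question 1, and a rank-carrying label column that stays unique per alarm is one common compromise for question 2 (two alarms share rank 3, so a bare Rank label cannot distinguish them):

        # Sketch, era-appropriate ggplot2: fix the factor levels first.
        ord <- order(spark$Rank, spark$Alarm)
        spark$Alarm <- factor(spark$Alarm,
                              levels = unique(as.character(spark$Alarm)[ord]))

        # Rank-based strip labels that stay unique per alarm:
        lab <- paste(spark$Rank, spark$Alarm)
        spark$RankLabel <- factor(lab, levels = unique(lab[ord]))

        # Rebuild the plot so it picks up the new columns, then facet:
        s <- ggplot(spark, aes(Week, Total))   # + opts(...) as above
        s + facet_grid(RankLabel ~ .) + geom_line()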

    Read the article

  • Silverlight async calls and anonymous methods....

    - by JLewis
    I know there are a couple of debates on this kind of thing. At any rate, I have several cases where I need to populate combobox items based on enumerations returned from the WCF service. In an effort to keep code clean, I started this approach. After looking into it more, I don't think it works as well as I initially thought. I'm throwing this out to get recommendations/advice/code snippets on how you would do this or how you currently do it. I may be forced to have a separate, non-anonymous method. I hate doing that for something like this, but at the moment I don't see it working another way...

        EventHandler<GetEnumerationsForTypeCompletedEventArgs> ev = null;
        ev = delegate(object eventSender, GetEnumerationsForTypeCompletedEventArgs eventArgs)
        {
            if (eventArgs.Error == null)
            {
                //comboBox.ItemsSource = eventArgs.Result;
                // populate combobox for display purposes (for now)
                foreach (Enumeration e in eventArgs.Result)
                {
                    ComboBoxItem cbi = new ComboBoxItem();
                    cbi.Content = e.EnumerationValueDisplayed;
                    comboBox.Items.Add(cbi);
                }
                // remove event so we don't keep adding new events each time we need an enumeration
                proxy.GetEnumerationsForTypeCompleted -= ev;
            }
        };
        proxy.GetEnumerationsForTypeCompleted += ev;
        proxy.GetEnumerationsForTypeAsync(sEnumerationType);

    Basically, in this example we use ev to hold the anonymous method so we can use ev from within the method to remove it from the event once called. This prevents the method from being called more than once. I suspect that the comboBox local variable declared before this call, but within the same method, is not always the combobox originally intended, but I can't really verify that yet. I may add a tag to it and do some tests and populating to verify. Sorry if this is not clear; I can elaborate more if needed. Thanks, Jeff
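
    Two observations worth testing. First, the handler above only unsubscribes when Error is null, so a faulted call leaks a subscription; moving the -= before the error check avoids that. Second, on the captured-comboBox worry: generated async proxy methods take an optional userState argument that comes back on the completed event, which removes the closure from the picture entirely. A sketch:

        // Sketch: route the target ComboBox through userState instead of a
        // captured local; e.UserState is whatever was passed to the Async call.
        proxy.GetEnumerationsForTypeCompleted += OnEnumerationsCompleted;
        proxy.GetEnumerationsForTypeAsync(sEnumerationType, comboBox);

        void OnEnumerationsCompleted(object sender, GetEnumerationsForTypeCompletedEventArgs e)
        {
            var target = (ComboBox)e.UserState;   // the combobox this call was for
            if (e.Error != null) return;
            foreach (Enumeration item in e.Result)
                target.Items.Add(new ComboBoxItem { Content = item.EnumerationValueDisplayed });
        }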

    Read the article

  • Nagios returns "No output returned from plugin" for a running process

    - by user56291
    I have a Nagios server and a bunch of Nagios clients that I currently monitor. All the clients are set up with the following NRPE configuration. check_users, check_load, and the other metrics are successfully displayed on the Nagios interface, but check_nginx and check_server_proxy are displayed as "Unknown" (No output returned from plugin). As far as I understand, Nagios simply runs the ps command and looks for either the argument strings or the name of the command to verify whether the service is running. Also, with the -c flag one can give Nagios a threshold to determine the output (i.e. -c 1 returns OK if it finds at least 1 process).

    nrpe_local.cfg:

        ######################################
        # Do any local nrpe configuration here
        ######################################
        allowed_hosts =127.0.0.1,10.0.2.181
        command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
        command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
        command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10%
        command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
        command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
        command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25%
        command[check_server_proxy]=/usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"
        command[check_nginx]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C nginx

    nagios_server.cfg:

        ...
        define host{
            use             generic-host    ; Name of host template to use
            host_name       plum
            alias           plum
            address         10.0.2.88
            check_command   check-host-alive-by-ssh
        }
        ...
        # Check api-proxy-server
        define service{
            use                   generic-service
            host_name             plum
            service_description   check api proxy service
            check_command         check_nrpe!check_server_proxy
        }
        define service {
            use                     generic-service   ; Name of service template to use
            host_name               plum
            service_description     CHECK_NGINX
            check_period            24x7
            max_check_attempts      3
            normal_check_interval   5
            retry_check_interval    3
            check_command           check_nrpe!check_nginx
            notifications_enabled   1
        }

    Also, when I run the command on the Nagios client:

        /usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"

    I get the desired output:

        PROCS OK: 1 process with args 'api-v1/server.js'

    I would really appreciate any pointers that might help me work out why the NRPE command does not return the desired output on the Nagios server panel.
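
    Since the plugin works locally, the next diagnostic step is to exercise the exact NRPE round trip from the Nagios server, which shows what Nagios itself sees; a sketch:

        # Run from the Nagios server against the client (plum, 10.0.2.88):
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_nginx
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_server_proxy

        # If these fail while the local run succeeds, the usual suspects are
        # the allowed_hosts list, an nrpe daemon that was not restarted after
        # the config edit, or the quote characters in the -a argument being
        # passed literally so no process matches.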

    Read the article

  • Reverse ssh tunneling with Tomato

    - by Deivuh
    Since my ISP restricts some incoming connections, I can't access my home PC remotely. What I'm trying to do is make a reverse ssh connection from my home router (running Tomato firmware) to the office computer, so I can access home remotely from the office through that open connection. What I'm doing is running the following from the router:

        ssh -R 12345:localhost:22 oUser@office

    Then I leave top running to keep the connection alive. From my office I run:

        ssh hUser@localhost -p 12345

    but I get the following message with verbose on:

        OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to localhost [::1] port 19999.
        debug1: Connection established.
        debug1: identity file /home/oUser/.ssh/id_rsa type -1
        debug1: identity file /home/oUser/.ssh/id_rsa-cert type -1
        debug1: identity file /home/oUser/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: identity file /home/oUser/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    I have password remote access enabled in Tomato's configuration, so I should be able to access it without having the public key in authorized_keys, but I've even tried adding it and I still get the same result. I've done the same thing with my home computer, and it works perfectly, but it doesn't with the router. Am I doing something wrong? Thanks in advance.
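
    Two details in that log stand out: the client is connecting to [::1] (IPv6 loopback) on port 19999, while sshd binds remote forwards to 127.0.0.1 by default, and the tunnel was opened on 12345. A hedged sketch of both ends (OpenSSH option names shown; Tomato ships dropbear, whose client dbclient uses -K 60 for the equivalent keepalive):

        # From the router: let ssh keep the tunnel alive itself instead of
        # leaving top running; -N means no remote command is needed.
        ssh -N -R 12345:localhost:22 -o ServerAliveInterval=60 oUser@office

        # From the office: force IPv4 and the loopback address explicitly,
        # and make sure the port matches the one in the -R option.
        ssh -4 -p 12345 hUser@127.0.0.1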

    Read the article

  • How to get correct Set-Cookie headers for NSHTTPURLResponse?

    - by overboming
    I want to use the following code to log in to a website which returns its cookie information in the following manner:

        Set-Cookie: 19231234
        Set-Cookie: u2am1342340
        Set-Cookie: owwjera

    I'm using the following code to log in to the site, but the print statement at the end doesn't output anything about "set-cookie". On Snow Leopard, the library seems to automatically pick up the cookie for this site, and later connections are sent out with the correct "cookie" headers. But on Leopard it doesn't work that way, so what is the trigger for this "remember the cookie for a certain root url" behavior?

        NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
        [request setURL:[NSURL URLWithString:uurl]];
        [request setHTTPMethod:@"POST"];
        [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
        [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
        [request setValue:@"keep-alive" forHTTPHeaderField:@"Connection"];
        [request setValue:@"300" forHTTPHeaderField:@"Keep-Alive"];
        [request setHTTPShouldHandleCookies:YES];
        [request setHTTPBody:postData];
        [request setTimeoutInterval:10.0];

        NSData *urlData;
        NSHTTPURLResponse *response;
        NSError *error;
        urlData = [NSURLConnection sendSynchronousRequest:request
                                        returningResponse:&response
                                                    error:&error];
        NSLog(@"response dictionary %@", [response allHeaderFields]);
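
    One thing to check: the Set-Cookie lines shown carry no name=value pair, and strict cookie parsers may discard them. Separately from the raw header dump, the reliable place to see what actually got stored on either OS is the shared cookie storage; a sketch:

        // Sketch: inspect the shared cookie storage after the request.
        NSArray *cookies = [[NSHTTPCookieStorage sharedHTTPCookieStorage]
                               cookiesForURL:[NSURL URLWithString:uurl]];
        for (NSHTTPCookie *cookie in cookies)
            NSLog(@"cookie %@ = %@", [cookie name], [cookie value]);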

    Read the article

  • ASP.NET MVC2 and MemberShipProvider: How well do they go together?

    - by Sparhawk
    I have an existing ASP.NET application with lots of users and a large database. Now I want to have it in MVC 2. I do not want to migrate; I'm doing it more or less from scratch. I want to keep the database and not touch it too much: I already have my database tables, and I also want to keep my LINQ to SQL layer. I didn't use a MembershipProvider in my current implementation (in ASP.NET 1.0 that wasn't strongly supported). So either I write my own MembershipProvider to meet the needs of my database and app, or I don't use a MembershipProvider at all. I'd like to understand the consequences of not using the MembershipProvider. What is linked to it? I understand that in ASP.NET the login controls are linked to the provider. The AccountModel that is automatically generated with MVC 2 could easily be changed to support my existing logic. What happens when a user is identified by an AuthCookie? Does MVC use the MembershipProvider then? Am I overlooking something? I have the same questions regarding RoleProvider. Input is greatly appreciated.
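
    On the AuthCookie question specifically: the forms-auth cookie is issued and decrypted by the FormsAuthentication module, not by the MembershipProvider, so skipping the provider does not break cookie-based identification. A hedged sketch of a provider-free login action (MyUserService stands in for your own LINQ to SQL validation logic):

        // Sketch: forms authentication without any MembershipProvider.
        [HttpPost]
        public ActionResult LogOn(string userName, string password)
        {
            if (MyUserService.Validate(userName, password))   // your own check
            {
                FormsAuthentication.SetAuthCookie(userName, false);
                return RedirectToAction("Index", "Home");
            }
            ModelState.AddModelError("", "Invalid user name or password.");
            return View();
        }

    What you give up without providers is mainly the plug-in points that the WebForms login controls and Website Administration Tool rely on; in MVC you write those views and actions yourself anyway.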

    Read the article

  • Is it possible to store ObjectContext on the client when using WCF and Entity Framework?

    - by Sergey
    Hello, I have a WCF service which performs CRUD operations on my data model: Add, Get, Update and Delete. I'm using Entity Framework for my data model. On the other side there is a client application which calls methods of the WCF service. For example, I have a Customer class:

        [DataContract]
        public class Customer
        {
            public Guid CustomerId { get; set; }
            public string CustomerName { get; set; }
        }

    and the WCF service defines this method:

        public void AddCustomer(Customer c)
        {
            MyEntities _entities = new MyEntities();
            _entities.AddToCustomers(c);
            _entities.SaveChanges();
        }

    and the client application passes objects to the WCF service:

        var customer = new Customer() { CustomerId = Guid.NewGuid(), CustomerName = "SomeName" };
        MyService svc = new MyService();
        svc.Add(customer); // or svc.Update(customer), for example

    But when I need to pass a great number of objects to the WCF service, there could be a performance issue, because I need to create an ObjectContext each time I do Add(), Update(), Get() or Delete(). What I'm thinking of is keeping the ObjectContext on the client and passing it to the WCF methods as an additional parameter. Is it possible to create and keep an ObjectContext on the client and not recreate it for each operation? If it is not, how could I speed up passing a huge amount of data to the WCF service? Sergey
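
    The ObjectContext is not serializable and is designed to be short-lived, so it cannot cross the service boundary; the usual answer is to batch the entities so one context (and one network round trip) handles many of them. A minimal sketch:

        // Sketch: accept a batch so a single context serves the whole call.
        public void AddCustomers(List<Customer> customers)
        {
            using (var entities = new MyEntities())
            {
                foreach (var c in customers)
                    entities.AddToCustomers(c);
                entities.SaveChanges();   // one flush for the whole batch
            }
        }

    Creating the context per call is cheap compared to the WCF round trip itself; it is the per-object calls, not the per-call contexts, that dominate the cost.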

    Read the article

  • SignalR groups - connecting/disconnecting and sending - am I missing something?

    - by Terry_Brown
    I'm very new to SignalR, and have rolled up a very simple app that will take questions for moderation at conferences (felt like a straightforward use case). I have 2 hubs at the moment:

    - Question (for asking questions)
    - Speaker (these should receive questions and allow moderation, but that will come later)

    The solution lives at https://github.com/terrybrown/InterASK

    After watching a video (by David Fowler/Damian Edwards) (http://channel9.msdn.com/Shows/Web+Camps+TV/Damian-Edwards-and-David-Fowler-Demonstrate-SignalR) and another that I can't find the URL for at the moment, I thought I'd go with 'groups' as the concept to keep messages flowing to the right people. I implemented IConnected and IDisconnect as I'd seen in one of the videos, and upon debugging I can see Connect fire (and on reload I can see Disconnect fire), but it seems nothing I do adds a person to a group. The SignalR documentation suggests "Groups are not persisted on the server so applications are responsible for keeping track of what connections are in what groups so things like group count can be achieved", which I guess is telling me that I need to keep some method (static or otherwise?) of tracking who is in a group? Certainly I can't currently send to groups, though I have no problem distributing to anyone connected to the app and implementing the same JS method (2 machines on the same page). I suspect I'm just missing something. I read a few of the other questions on here, but none of them mention IConnected/IDisconnect, which tells me these are either new (and nobody is using them) or old (and nobody is using them). I know this could be considered a subjective question, but what I'm looking for is just a simple means of managing the groups so that I can do what I want to: send a question from one hub and have people connected to a different hub receive it. Groups felt like the cleanest solution for this? Many thanks folks. Terry
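
    A hedged sketch of the group flow; the names here follow the slightly later SignalR 1.x API (on the 0.5-era bits that shipped IConnected/IDisconnect the equivalents differ, so treat this as the shape rather than exact calls). The key points: a connection must explicitly join a group, and a hub can send to another hub's clients through the hub context:

        // Sketch only (SignalR 1.x-era API): the client calls JoinSession
        // after connecting; group membership is per connection id.
        public class SpeakerHub : Hub
        {
            public Task JoinSession(string sessionId)
            {
                return Groups.Add(Context.ConnectionId, "session-" + sessionId);
            }
        }

        // Sending from one hub to clients connected to the other hub:
        public class QuestionHub : Hub
        {
            public void AskQuestion(string sessionId, string text)
            {
                var speakers = GlobalHost.ConnectionManager.GetHubContext<SpeakerHub>();
                speakers.Clients.Group("session-" + sessionId).receiveQuestion(text);
            }
        }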

    Read the article

  • HTTP Error 503 - Service is unavailable (how to fix?)

    - by SilverLight
    I have a web site for downloading mobile files, and there are many users on it. Sometimes I get the error below:

        HTTP Error 503 - Service is unavailable

    1. Why does this error happen, and what does it mean?
    2. As I understand it, Apache frees itself up when it's overloaded, but what about IIS? How can I put some limits on my server (I have remote access to it) to prevent this error from happening?
       a. Is limiting download speed effective in preventing this error? How can I do that? Is Squid useful for this job, or can I do it with an IIS extension?
       b. Is limiting download bandwidth effective in preventing this error? How can I do that (with IIS or another extension)? On the right side of IIS, in the configure area, I found some limits. What do those limits mean, and can I use them to keep my server alive all the time?

    EDIT: after viewing Event Viewer (Windows) > Custom Views > Server Roles > Web Server (IIS), I found there are no errors in that area, but many warnings and informational events. The latest warnings are like these:

        warning: A worker process '2408' serving application pool 'ASP.NET 4.0 (Integrated)' failed to stop a listener channel for protocol 'http' in the allotted time. The data field contains the error number.
        warning: A process serving application pool 'ASP.NET 4.0 (Integrated)' exceeded time limits during shut down. The process id was '6764'.
        warning: A worker process '3232' serving application pool 'ASP.NET 4.0 (Integrated)' failed to stop a listener channel for protocol 'http' in the allotted time. The data field contains the error number.
        warning: A process serving application pool 'ASP.NET 4.0 (Integrated)' exceeded time limits during shut down. The process id was '3928'.

    Thanks in advance. Best regards.
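
    In IIS, a 503 generally means HTTP.sys accepted the request but the application pool could not take it: either the pool was stopped (for example by rapid-fail protection) or its request queue was full. The recycle-timeout warnings above fit that picture, since slow shutdowns during recycling can leave the pool unable to absorb a burst. A hedged sketch of where to look and one knob to try (pool name taken from the log; the queue value is illustrative):

        rem Sketch: inspect the pool's current settings.
        %windir%\system32\inetsrv\appcmd list apppool "ASP.NET 4.0 (Integrated)" /text:*

        rem Raise the request queue limit (default 1000) if bursts cause 503s.
        %windir%\system32\inetsrv\appcmd set apppool "ASP.NET 4.0 (Integrated)" /queueLength:5000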

    Read the article

  • Database with "Open Schema" - Good or Bad Idea?

    - by Claudiu
    The co-founder of Reddit gave a presentation on issues they had while scaling to millions of users. A summary is available here. What surprised me is point 3:

        Instead, they keep a Thing Table and a Data Table. Everything in Reddit
        is a Thing: users, links, comments, subreddits, awards, etc. Things keep
        common attributes like up/down votes, a type, and creation date. The Data
        table has three columns: thing id, key, value. There's a row for every
        attribute: a row for title, url, author, spam votes, etc. When they add
        new features they don't have to worry about the database anymore. They
        don't have to add new tables for new things or worry about upgrades.

    This seems like a terrible idea to me, but it seems to have worked out for Reddit. Is it a good idea in general, though? Or is it a peculiarity of Reddit that happened to work out for them?
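
    For concreteness, this is the entity-attribute-value (EAV) pattern, and the classic objection shows up as soon as you read data back out: every attribute costs a self-join. A sketch (table and column names illustrative, following the thing/data description above):

        -- Sketch: fetching one link's title and url takes one join per
        -- attribute, which is the usual EAV trade-off against flexibility.
        SELECT t.id,
               d_title.value AS title,
               d_url.value   AS url
        FROM   thing t
        JOIN   data d_title ON d_title.thing_id = t.id AND d_title.key = 'title'
        JOIN   data d_url   ON d_url.thing_id   = t.id AND d_url.key   = 'url'
        WHERE  t.id = 42;

    The trade works when reads are served from caches keyed by thing id (as Reddit's were) rather than by ad-hoc SQL over attributes.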

    Read the article

  • How to roll the log file on startup in logback

    - by Mike Q
    Hi all, I would like to configure logback to do the following:

    1. Log to a file
    2. Roll the file when it reaches 50MB
    3. Only keep 7 days' worth of logs
    4. On startup, always generate a new file (do a roll)

    I have it all working except for the last item, the startup roll. Does anyone know how to achieve that? Here's the config...

        <appender name="File" class="ch.qos.logback.core.rolling.RollingFileAppender">
          <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg \(%file:%line\)%n</Pattern>
          </layout>
          <File>server.log</File>
          <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>server.%d{yyyy-MM-dd}.log</FileNamePattern>
            <!-- keep 7 days' worth of history -->
            <MaxHistory>7</MaxHistory>
            <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
              <MaxFileSize>50MB</MaxFileSize>
            </TimeBasedFileNamingAndTriggeringPolicy>
          </rollingPolicy>
        </appender>
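
    Two notes, both hedged against the logback of this era. First, SizeAndTimeBasedFNATP expects an %i token in the FileNamePattern (e.g. server.%d{yyyy-MM-dd}.%i.log) so that size-triggered rolls within one day get distinct names. Second, there is no built-in "roll on startup" switch, but the appender can be rolled programmatically once the context is up, assuming the appender is attached to the root logger and named "File" as in the config above:

        import ch.qos.logback.classic.Logger;
        import ch.qos.logback.classic.LoggerContext;
        import ch.qos.logback.classic.spi.ILoggingEvent;
        import ch.qos.logback.core.rolling.RollingFileAppender;
        import org.slf4j.LoggerFactory;

        public class LogRoller {
            // Sketch: call once early in application startup.
            public static void rollOnStartup() {
                LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();
                Logger root = ctx.getLogger(Logger.ROOT_LOGGER_NAME);
                RollingFileAppender<ILoggingEvent> appender =
                        (RollingFileAppender<ILoggingEvent>) root.getAppender("File");
                if (appender != null) {
                    appender.rollover();   // starts a fresh server.log immediately
                }
            }
        }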

    Read the article

  • Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.
        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to
        \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...
        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has
        occurred: (0x8004231f) Insufficient storage available to create either
        the shadow copy storage file or other shadow copy data.

    - The EFI System Partition is 100 MB.
    - The Recovery Partition is 300 MB.
    - The C: partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free.
    - The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free.

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the new "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
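
    For what it's worth, 0x8004231f is VSS_E_INSUFFICIENT_STORAGE, and since C: plainly has room, one frequently reported culprit on Windows 8.1 is the small recovery partition, which -allCritical pulls into the backup but which is often too full for VSS to snapshot. Two hedged things to try: exclude or enlarge that partition, and give C:'s shadow storage explicit rather than percentage bounds (size illustrative):

        rem Sketch: explicit shadow-storage bounds for the volume being backed up.
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=60GB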

    Read the article

  • How can I encrypt CoreData contents on an iPhone

    - by James A. Rosen
    I have some information I'd like to store statically encrypted in an iPhone application. I'm new to iPhone development, so I'm not terribly familiar with Core Data and how it integrates with the views. I have the data as JSON, though I can easily put it into a SQLite3 database or any other backing data format. I'll take whatever is easiest (a) to encrypt and (b) to integrate with the iPhone view layer. The user will need to enter the password to decrypt the data each time the app is launched. The purpose of the encryption is to keep the data from being accessible if the user loses the phone. For speed reasons, I would prefer to encrypt and decrypt the entire file at once rather than encrypting each individual field in each row of the database. Note: this isn't the same idea as Question 929744, in which the purpose is to keep the user from messing with or seeing the data; here the data should be perfectly transparent when in use. Also note: I'm willing to use SQLCipher to store the data, but would prefer to use things that already exist in the iPhone/Core Data framework rather than go through the lengthy build/integration process involved.
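
    The whole-file approach maps well onto CommonCrypto, which ships with the SDK; a hedged sketch of one-shot AES decryption of the data file (key derivation, IV handling, and error paths are deliberately simplified; a real app should derive the key from the password with a proper salted KDF and store a random IV alongside the ciphertext):

        #import <CommonCrypto/CommonCryptor.h>

        // Sketch: decrypt the entire ciphertext blob in one CCCrypt call.
        NSData *DecryptedData(NSData *cipher, NSData *key) {
            size_t outLen = 0;
            NSMutableData *out =
                [NSMutableData dataWithLength:cipher.length + kCCBlockSizeAES128];
            CCCryptorStatus status = CCCrypt(kCCDecrypt, kCCAlgorithmAES128,
                                             kCCOptionPKCS7Padding,
                                             key.bytes, kCCKeySizeAES256,
                                             NULL /* IV: use a real, stored one */,
                                             cipher.bytes, cipher.length,
                                             out.mutableBytes, out.length, &outLen);
            if (status != kCCSuccess) return nil;
            [out setLength:outLen];
            return out;
        }

    The decrypted bytes can then be parsed as JSON (or opened as an in-memory store) and fed to the view layer as usual.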

    Read the article

  • Can't set up a 3-node MongoDB replica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I do the initiate and add the first node on the primary:

        [foo@host-a mongodb]$ bin/mongo localhost
        MongoDB shell version: 1.8.2
        connecting to: localhost
        > rs.initiate()
        {
            "info2" : "no configuration explicitly specified -- making one",
            "info" : "Config now saved locally. Should come online in about a minute.",
            "ok" : 1
        }
        > rs.add("host-b")
        { "ok" : 1 }

    So far so good, but when I try to add the third node:

        myset:PRIMARY> rs.addArb("host-c")
        Sun Aug 7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
        Sun Aug 7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
        Sun Aug 7 22:57:09 DBClientCursor::init call() failed
        Sun Aug 7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
        Sun Aug 7 22:57:09 Error: error doing query: failed shell/collection.js:150
        Sun Aug 7 22:57:09 trying reconnect to 127.0.0.1
        Sun Aug 7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became secondary, and host-b was marked as dead, but actually it is still alive:

        myset:SECONDARY> rs.status()
        {
            "set" : "myset",
            "date" : ISODate("2011-08-08T04:03:23Z"),
            "myState" : 2,
            "members" : [
                {
                    "_id" : 0,
                    "name" : "host-a:27017",
                    "health" : 1,
                    "state" : 2,
                    "stateStr" : "SECONDARY",
                    "optime" : { "t" : 1312775799000, "i" : 1 },
                    "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                    "self" : true
                },
                {
                    "_id" : 1,
                    "name" : "host-b",
                    "health" : 0,
                    "state" : 6,
                    "stateStr" : "(not reachable/healthy)",
                    "uptime" : 0,
                    "optime" : { "t" : 0, "i" : 0 },
                    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                    "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                    "errmsg" : "still initializing"
                }
            ],
            "ok" : 1
        }

    How could this happen? I just followed the guide in the document; did I do something wrong? Moreover, I can't do anything on the current secondary server. It doesn't allow me to reconfigure on the secondary node, but the problem is there is no primary node:

        myset:SECONDARY> rs.reconfig({})
        {
            "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
            "ok" : 0
        }

    Any ideas?
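
    A hedged reading of that output: the shell noise after rs.addArb() is normal, since a reconfig drops client connections and the shell reconnects; the real symptom is host-b stuck at "still initializing", which in this era most often meant the members could not resolve or reach one another by the exact names used in rs.add(). A quick bidirectional check:

        # From host-a, can we reach host-b by the name the config uses?
        mongo --host host-b --eval 'db.runCommand({ping:1})'
        # And the reverse, from host-b:
        mongo --host host-a --eval 'db.runCommand({ping:1})'
        # Also compare how host-b itself sees the set:
        mongo host-b/admin --eval 'printjson(rs.status())'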

    Read the article

  • Android RestTemplate OK on emulator but fails on real device

    - by Hossein
    I'm using Spring RestTemplate, and it works perfectly on the emulator, but if I run my app on a real device I get HttpMessageNotWritableException ... nested exception is java.net.SocketException: Broken pipe. Here are some lines of my code (keep in mind my app works perfectly on the emulator):

        LoggerUtil.logToFile(TAG, "url is [" + url + "]");
        LoggerUtil.logToFile(TAG, "NetworkInfo - " + connectivityManager.getActiveNetworkInfo());
        ResponseEntity<T> responseEntity = restTemplate.exchange(url, HttpMethod.POST, requestEntity, clazz);

    I know my device's network works perfectly, because all other applications on the device are working, and using the device browser I'm able to connect to my server, so my server is available. My server is up and running and my device is able to connect to it, so why do I get java.net.SocketException: Broken pipe? Before I call restTemplate.exchange() I log NetworkInfo, and it looks OK:

        - type: WIFI
        - status: CONNECTED/CONNECTED
        - isAvailable: true

    Thanks in advance.

    Update: It is really weird. Even if I use HttpURLConnection, it works perfectly on the emulator, but on the real device I get 400 Bad Request. Here is my code:

        HttpURLConnection con = null;
        try {
            String url = ....;
            LoggerUtil.logToFile(TAG, "url [" + url + "]");
            con = (HttpURLConnection) new URL(url).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Connection", "Keep-Alive");
            con.setDoInput(true);
            con.setDoOutput(true);
            con.setUseCaches(false);
            con.connect();
            LoggerUtil.logToFile(TAG, "con.getResponseCode is " + con.getResponseCode());
            LoggerUtil.logToFile(TAG, "con.getResponseMessage is " + con.getResponseMessage());
        } catch (Throwable t) {
            LoggerUtil.logToFile(TAG, "*** failed [" + t + "]");
        }

    In the log file I see:

        con.getResponseCode is 400
        con.getResponseMessage is Bad Request
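
    One hedged diagnostic angle: a 400 from two different HTTP stacks on the device, but not on the emulator, is consistent with request content that differs on the device, most commonly unencoded characters (spaces, locale-specific text) in the URL or parameters. Ruling that out is cheap (names illustrative):

        // Sketch: encode any user- or device-derived values before they
        // enter the URL; the emulator's test data may simply never need it.
        String encodedValue = java.net.URLEncoder.encode(rawValue, "UTF-8");
        String url = "http://example.com/api?msg=" + encodedValue;  // illustrative host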

    Read the article

  • How to explain to users the advantages of a dumb primary key?

    - by Hao
    Primary key attractiveness

    I have a boss (and also users) who want the primary key to be a sophisticated/smart/attractive control number (sort of like a Social Security number or credit card number format). I just padded the primary key (in views) with zeroes to appease their desire for a sophisticated, smart, attractive control number. But they wanted it as: the first 2 digits as the client code, then 4 digits for the year, then the last 4 digits as the transaction number for that client in that year, with the transaction number resetting to 1 when the next year rolls over. Each client's transactions start at 1, e.g. WM20090001, WM20090002, BB20090001, WM20100001, BB20100001.

    But as I wanted to keep things as simple as possible, I forwent embedding their suggested smartness in the primary key; I just kept the primary key auto-incrementing regardless of client and year. To make it not dull-looking (they really are adamant about making the control number sophisticated, smart, and attractive), I made the primary key appear smart to them: in the view query, I put the client code and four-digit year in front of the eight-zero-padded auto-increment key, i.e. WM200900000001, slug-like information on top of an auto-incremented primary key.

    By keeping the primary key auto-incrementing regardless of any other information, we avoid potential side effects when they edit a record. For example, if they made the mistake of entering a transaction under WM and then edit the client code to BB, with a smart primary key the WM customer's control numbers would have gaps. Or worse yet, instead of letting the control numbers have gaps/holes, the users would request that subsequent records shift up into the gap and have their primary keys re-adjusted (decremented).

    How do you deal with these user requests (reasonable or otherwise)? Do you yield to them? Or do you continue using a dumb primary key and explain the repercussions of a very smart/sophisticated primary key, and educate them on the significant advantages of a dumb primary key?

    P.S. Quotable quote (http://articles.techrepublic.com.com/5100-10878_11-1044961.html): "If you hold your tongue the first time users ask what is for them a reasonable request, things will work a lot better in the end."
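
    The view-layer approach can be pushed one step further so the "smart" number exists as real, queryable data without ever being the key; a hedged sketch in T-SQL (table and column names illustrative):

        -- Sketch: keep the dumb surrogate key, derive the display number.
        ALTER TABLE Transactions ADD ControlNumber AS
            ClientCode + CONVERT(char(4), YEAR(CreatedAt))
                       + RIGHT('00000000' + CONVERT(varchar(10), TransactionId), 8);

    Users can search, sort, and print ControlNumber, while edits to ClientCode simply change the displayed number and never touch the key or any foreign keys referencing it.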

    Read the article

  • Are background tasks the solution for this problem?

    - by Trinca
    Hi. I need to develop an enterprise app that monitors network traffic. Basically, it detects whether the user is on wi-fi or cellular data and saves the number of bytes sent and received over a period of time. I saw an app in the App Store that does exactly this job. Detecting wi-fi versus cellular data is quite simple using the Reachability sample provided by Apple. My problem is continuing to monitor the bytes sent and received while the app is in the background. As it is an enterprise app, I used UIBackgroundModes "voip" to keep the app from being terminated. I also implemented the setKeepAliveTimeout method, and I'm able to see the logs every 10 minutes, but only for the 10 seconds after the handler runs; that is, setKeepAliveTimeout lets my app run a timer for about 10 seconds every 10 minutes. I'm wondering whether a background task is the best solution for my problem. I'll appreciate any comments.

    EDIT: OK guys, this is the way that worked for me. First of all you should read this: http://www.christian-fries.de/blog/files/tag-ios.html. I tried it and it works really well. All we need to do is create a second thread detached from the main one; this way we have a continuous thread running forever. You should also see the GCD docs on Apple's website. The second thing to consider for an enterprise app is to set it up as a voip app: that way iOS will bring the app back up even after a reboot. It's a special behavior iOS has to keep voip apps running. That's it; I hope this can help you.
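
    For reference, a sketch of the voip keep-alive hook being described (API of this era; the 600-second value is the documented minimum, and logTrafficCounters is a hypothetical method standing in for your own sampling code):

        // Sketch: the handler gets only a short window (roughly 10 s) each
        // time it fires, so sample the byte counters and return quickly.
        [[UIApplication sharedApplication] setKeepAliveTimeout:600 handler:^{
            [self logTrafficCounters];   // hypothetical: read and persist counters
        }];

    The short window is by design, which is why the detached-thread approach in the linked article ends up being the workable route for continuous monitoring.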

    Read the article
