Search Results

Search found 21939 results on 878 pages for 'victor may'.


  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx

    I have worked for some days on the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes).

    As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little and see what happens deep inside BizTalk, and thus understand the reasons for this behavior.

    NOTE: in this article, I will focus on this High database size throttling condition. I will try and work on the other conditions in some not too distant future…

    Test conditions

    The singleton orchestration: for the purpose of this study, I have created a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.

    Throttling settings: I have two distinct hosts:
    - one that hosts the receive port (basic FILE port): Ports_ReceiveHost
    - one that hosts the orchestration: ProcessingHost

    In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value):
    [Throttling thresholds] Message count in database: 500 (default value: 50000)

    Evolution of performance counters when submitting messages

    Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object):
    - Database size
    - High database size
    - Message delivery throttling state
    - Message publishing throttling state
    - Message delivery delay (ms)
    - Message publishing delay (ms)
    - Message delivery throttling state duration
    - Message publishing throttling state duration
    (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)

    Database size

    It is quite obvious that we will start by watching the Database size and High database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm.

    NOTE: during this test I submitted 600 messages, one message every 10 ms, to see the evolution of the counters we have previously selected. Here is what happened:
    - From 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host kept growing until it reached a maximum of 504.
    - At 15:47:50, the high database size alert fired.

    At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense: this counter measures the size of the database queue that is being filled by the host, not consumed. Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page.

    Now, looking at the Message publishing throttling state for the receiving host, we can see that a throttling condition was reached at 15:47:50. We can also see that the Message publishing delay (ms) began growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.

    Digging further

    So, what happens to the database queue then? Is it flushed some day, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling was stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero. I guess I’ll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.

    Digging even further

    If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)

    A last word

    Let’s end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding! So whatever you do with those settings, do a lot of testing and benchmarking!
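    If you want to capture these counters outside the Perfmon UI, here is a minimal command-line sketch using the built-in Windows typeperf tool. The counter instance names are assumptions based on the host names used above, so adjust them to your own host instances:

        typeperf "\BizTalk:Message Agent(Ports_ReceiveHost)\Database size" "\BizTalk:Message Agent(Ports_ReceiveHost)\Message publishing throttling state" -si 10 -sc 360 -f CSV -o throttling.csv

    This samples the two counters every 10 seconds, 360 times, and logs them to a CSV file that can be replayed next to the SCOM alerts.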


  • JavaOne pictures and Community Commentary on JCP Awards

    - by heathervc
    We posted some pictures from JCP related events at JavaOne 2012 on the JCP Facebook page today. The 2012 JCP Program Award winners and some of the nominees responded to the community recognition of their achievements during some of the JCP events last week.

    “Our job on the EC is to balance the need of innovation – so we don’t standardize too early, or too late. We try to find that sweet spot that makes innovation and standardization work together, and not against each other.”
    - Ben Evans, CEO of jClarity and Executive Committee (EC) representative of the London Java Community, 2012 JCP Member/Participant of the Year Winner

    “SouJava has been evangelizing the Java platform, promoting the Java ecosystem in Brazil, and contributing to JSRs for several years. It’s very gratifying to have our work recognized, on behalf of many developers and Java User Groups around the world. This really is the work of a large group of people, represented by the few that can be here tonight.”
    - Michael Santos, representative of SouJava, 2012 JCP Member/Participant of the Year Winner

    “In the last years Credit Suisse has contributed to the development of Java EE specifications through participation in many customer advisory boards, through statements of requirements for extensions to the core Java related products in use, and active participation in JSRs. Winning the JCP Outstanding Spec Lead Award 2012 is very encouraging for our engagement and also demonstrates the level of expertise and commitment to drive the evolution of Java. Victor Grazi is happy and honored to receive this award.”
    - Susanne Cech Previtali, Executive Committee (EC) representative of Credit Suisse, accepting the award for the 2012 JCP Outstanding Spec Lead Winner

    “Managing a JSR is difficult. There are so many decisions to be made and so many good and varied opinions, you never really know if you have decided correctly. The key to success is transparency and collaboration. I am truly humbled by receiving this award; there are so many other active JSRs.” Victor added that going forward in the JCP EC, they would like to simplify and open the process of participation – which is being addressed in the JCP.Next initiative of the JCP EC. “We would also like to encourage the engagement of universities, professors and students – as an important part of the Java community. While innovation is the lifeblood of our community and industry, without strong standards and compatibility requirements, we all end up in a maze of technology where everything is slightly different and doesn’t quite work with everything else.”
    - Victor Grazi, Executive Committee (EC) representative of Credit Suisse, 2012 JCP Outstanding Spec Lead Winner

    “I am very pleased, of course, to accept this award, but the credit really should go to all of those who have participated in the work of the JCP, while pushing for changes in the way it operates. JCP.Next represents three JSRs. The first two are done, but the final step, JSR 358, is the complicated one, and it will bring in the lawyers. Just to give you an idea of what we’re dealing with, it affects licensing, intellectual property, patents, implementations not based on the Reference Implementation (RI), the role of the RI, compatibility policy, possible changes to the Technical Compatibility Kit (TCK), transparency, where individuals fit in, open source, and more.”
    - Patrick Curran, JCP Chair, Spec Lead on the JCP.Next JSRs (JSR 348, JSR 355 and JSR 358), 2012 JCP Most Significant JSR Winner

    “I’m especially glad to see the JCP community recognize JCP.Next for its importance. The governance work it represents is KEY to moving the Java platform forward and the success of the technology.”
    - John Rizzo, Executive Committee (EC) representative of Aplix Corporation, JSR Expert Group Member

    “I am deeply honored to be nominated. I had the privilege to receive two awards on behalf of Expert Groups and Spec Leads two years ago. But this time, I am nominated personally, which values my own contribution to the JCP, and of course, participation in JSRs and the EC work. I’m a fan of Agile principles and values. Working as an Agile Coach and Consultant, I use it for some of the biggest EC Member companies and projects. It fuels my ability to help the JCP become more agile, lean and transparent as part of the JCP.Next effort.”
    - Werner Keil, Individual Executive Committee (EC) Member, a 2012 JCP Member/Participant of the Year Nominee, JSR Expert Group Member

    “The JCP has always been some kind of institution for me,” Markus said. “If in technical doubt, I go there, look for the specifications of the implementation I work with at the moment and verify what I had observed. Since the beginning of my Java journey more than 12 years back now, I have always had a strong relationship with the JCP. Shaping the future of a technology by joining the JCP – giving feedback and contributing to the road ahead through individual JSRs – that brings you to a whole new level.” Calling himself “the new kid on the block,” he explained that for years he was afraid to join the JCP and contribute. But in reality, “Every single one of the big names I meet from the different Expert Groups is a nice person. People you can actually work with,” he says. “And nobody blames you for things you don't know. As long as you are committed and bring what is worth the most: passion, experience and the desire to make a difference.”
    - Markus Eisele, a 2012 JCP Member of the Year Nominee, JSR Expert Group Member

    Congratulations again to all of the nominees and winners of the JCP Program Awards. Next year, we will add another award for the group of JUG members (not an entire JUG) that makes the best contribution to the Adopt-a-JSR program. Let us know if you have other suggestions or improvements.


  • Maybe VS2010 needs Repairing

    - by wisecarver
    OK..So John Papa, Nasir Aziz and myself are trying to help Victor Gaudioso figure out why he can’t see the ADO.NET Entity Data Model template in Visual Studio 2010. At first I thought maybe he was using the Pro version and this was one of those oddities. (After all I’m using the Ultimate version and I also see this template in the Express version.) Nope..Vic was also using VS2010 Ultimate. Believe me Vic is a Silverlight and WPF genius and knows his way around in VS. He was working with the latest...(read more)


  • “Touch is a transitional technology”, but a transition to what? An Apple designer finds human-machine interactions too poor

    “Touch is a transitional technology.” But a transition to what? An Apple designer thinks today's human-machine interactions are too poor. “To me, saying that an image under glass (‘Pictures Under Glass’) is the future of human-machine interaction (HMI) amounts to saying that the future of photography is black and white. [Touch] is obviously a transitional technology. And the shorter the transition, the better.” This is how Bret Victor, Human-Interface Operator at Apple, sums up his thinking. By “Pictures Under Glass” he means today's touch technology, in other words tablets and other smartphones whose screens are smooth...


  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about, because I don’t think companies realize the effects of their service policies on loyal customers.

    Bad Customer Service Example #1

    Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering, and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo, and this deal is attractive for several reasons:

    - I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me.
    - My entertainment is almost all on a single invoice. I’m no longer going to be billed by both suddenLink and TiVo.
    - TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade.

    I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages.

    Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because I should never be cancelling because I am unhappy. And if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easily as the entry door.

    After quite some time on hold, I talked to “Victor”, who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo. Cool, I said, but what about the cost? He said there was no extra cost. This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least. That was four months ago.

    This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo; I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I was leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice and to take advantage of the pay-over-time. So, after explaining that I had requested a termination of the non-suddenLink account (TiVo can see both, of course), I was put on hold again for quite some time while my refund was “approved”. “Les” said that he could see my cancellation request back in July. Note that it is now November, so they have billed me inappropriately four times.

    After quite some time, he came back on the line and told me that he was able to “get me most of my money back”: he got approval to refund 90 days. So even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file, and they admit overbilling me, I am only getting “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo; I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need.

    Here is what TiVo Customer Service accomplished on those two calls: I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!


  • Solaris Day in NY and Boston

    - by unixman
    Hey all -- we're hosting yet another Solaris event in New York. This one will be on November 29th and focused on some key in-depth technologies in Solaris 11, which was released earlier this month. Speakers include Dave Miner, Glenn Brunette and Jeff Victor. It starts in the morning and goes through lunch; check out the agenda from the link below. Topics include: the new and improved installation and package management experience, virtualization, ZFS and security. Please check it out and come join us! The RSVP link is below:
    http://www.oracle.com/go/?&Src=7239490&Act=34&pcode=NAFM10128512MPP016
    Additionally, if you are in the Boston area, an identical event will be held in Burlington the following day, on November 30th. The RSVP link for that is:
    http://www.oracle.com/us/dm/h2fy11/21285-nafm10128512mpp013-oem-525338.html
    Hope to see you there!


  • Pictures from VTdotNET VS2010 Launch

    Vermont.NET’s April meeting coincided with the Visual Studio 2010 launch, so we had our own launch event! With cake… It was our biggest meeting attendance ever, with 60 signed up and 56 signed in at the door. Four people presented “some of our favorite things about VS2010 and .NET 4.0”. The presenters were Victor Castro, Eric Hall, Rob Hale and myself. Instead of the usual pizza, we had a feast of sandwiches, rollups, chips and veggies & dip, all very generously...


  • 10th Annual JCP Award Winners Announced

    - by heathervc
    The 10th JCP Annual Awards were presented in three categories yesterday evening at the JCP Party during JavaOne. Congratulations to the winners and the nominees for their contributions to the Java Community!

    JCP Member/Participant of the Year: London Java Community and SouJava
    For their historic contribution to the Adopt-a-JSR program and supporting Java developers through the JCP.

    Outstanding Spec Lead: Victor Grazi, Credit Suisse (JSR 354, Money and Currency API)
    For his dedicated, focused expertise in solving issues representing money and currencies.

    Most Significant JSR: JSR 348, JSR 355 and JSR 358 (JCP.Next)
    These three JSRs will set the direction and procedures for the next-generation JCP.

    You can view profiles of all the nominees on jcp.org.


  • ssh key error - Permission denied (publickey,gssapi-keyex,gssapi-with-mic)

    - by user1963938
    Amazon EC2 :: RedHat 6, 64-bit. I'm trying to follow the SOCKS5 guidelines (http://www.catonmat.net/blog/linux-socks5-proxy/) to open a SOCKS proxy on one of our servers, but unfortunately I got stuck at step 1:

    ssh -N -D 0.0.0.0:1080 localhost

    I get the error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic). How do I fix it? More debug info:

    ssh -v -f -N -D 0.0.0.0:1080 localhost
    OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug1: Connecting to localhost [127.0.0.1] port 22.
    debug1: Connection established.
    debug1: permanently_set_uid: 0/0
    debug1: identity file /root/.ssh/identity type -1
    debug1: identity file /root/.ssh/id_rsa type -1
    debug1: identity file /root/.ssh/id_dsa type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
    debug1: match: OpenSSH_5.3 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.3
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Host 'localhost' is known and matches the RSA host key.
    debug1: Found key in /root/.ssh/known_hosts:1
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
    debug1: Next authentication method: gssapi-keyex
    debug1: No valid Key exchange context
    debug1: Next authentication method: gssapi-with-mic
    debug1: Unspecified GSS failure. Minor code may provide more information
    Credentials cache file '/tmp/krb5cc_0' not found
    debug1: Unspecified GSS failure. Minor code may provide more information
    Credentials cache file '/tmp/krb5cc_0' not found
    debug1: Unspecified GSS failure. Minor code may provide more information
    debug1: Unspecified GSS failure. Minor code may provide more information
    debug1: Next authentication method: publickey
    debug1: Trying private key: /root/.ssh/identity
    debug1: Trying private key: /root/.ssh/id_rsa
    debug1: Trying private key: /root/.ssh/id_dsa
    debug1: No more authentication methods to try.
    Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
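    The debug output shows what is wrong: sshd only offers publickey and GSSAPI authentication (no password), and all three identity files are missing ("type -1"), so there is no key to offer. A minimal sketch of a likely fix, assuming root is the connecting user and password authentication stays disabled:

        ssh-keygen -t rsa -f /root/.ssh/id_rsa        # accept the defaults; optionally set a passphrase
        cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
        chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
        ssh -N -D 0.0.0.0:1080 localhost              # should now authenticate with the new key

    If PermitRootLogin in /etc/ssh/sshd_config disallows root logins, run the same steps as an unprivileged user instead.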


  • Can't Access TFS 2010 Beta 2 from Visual Studio 2010 Beta 2 when domain joined

    - by Brian Sullivan
    I'm experimenting with an installation of TFS 2010 Beta 2 on a virtual machine under VirtualBox running Windows Server 2008. When I've got the server in a workgroup, I can connect to it from Visual Studio just fine, as long as I provide credentials for a local user on the server machine when prompted by the "Connect to Team Foundation Server" dialog. The desktop I'm running Visual Studio on is joined to a domain. However, when I join the server to the domain, I can no longer connect to it from Visual Studio. I get a pretty generic error message: "TF31002 - Unable to connect to team foundation server". It gives me several different possible problems, including an incorrect address or an incorrect username and password. I've already added the domain Windows identity with which I'm logged on to the desktop to the TFS Admins group on the server, so I don't think it's a username/password problem. I've also tried putting the literal IP address of the server in the dialog address box instead of the machine name, but still no dice. I made sure that network discovery was enabled on the server, too, and can navigate to "\\webserver2008" in Windows Explorer without any problems. It shouldn't be a firewall problem, since the TFS install creates the appropriate exceptions in Windows Firewall. It's all a bit confusing, since it seems to work when the server is in a workgroup. Note: I'm a dev, not an admin, so there are many subtleties of server administration with which I'm not familiar. Please make no assumptions about what I may or may not have tried; what may be obvious to you may have never occurred to me. Thanks in advance!


  • Xenserver 6.2 cannot send alert using gmail smtp

    - by Crimson
    I'm using XenServer 6.2 and configured ssmtp.conf and mail_alert.conf in order to receive alerts through email. I followed the instructions in the http://support.citrix.com/servlet/KbServlet/download/34969-102-706058/reference.pdf document. I'm using the Gmail SMTP server to send the emails. When I try

    [root@xen /]# ssmtp [email protected]

    from the command line and send the email, no problem: it goes right out. But when I set some VM to generate alerts, the alerts are generated and I see them in XenCenter, but the emailing is not working. I see this in the /var/log/maillog file:

    May 27 16:17:09 xen sSMTP[30880]: Server didn't like our AUTH LOGIN (530 5.7.0 Must issue a STARTTLS command first. 18sm34990758wju.15 - gsmtp)

    From the command line everything works fine. This is the log record for the above command-line operation:

    May 27 15:55:58 xen sSMTP[27763]: Creating SSL connection to host
    May 27 15:56:01 xen sSMTP[27763]: SSL connection using RC4-SHA
    May 27 15:56:04 xen sSMTP[27763]: Sent mail for [email protected] (221 2.0.0 closing connection ln3sm34863740wjc.8 - gsmtp) uid=0 username=root outbytes=495

    Any ideas?
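    The 530 response suggests the alert path reaches Gmail on the submission port without negotiating STARTTLS, while the manual run used the implicit-SSL path ("Creating SSL connection"). A hedged sketch of an /etc/ssmtp/ssmtp.conf that forces STARTTLS on port 587 (the account values are placeholders, not from the question):

        root=[email protected]
        mailhub=smtp.gmail.com:587
        UseSTARTTLS=YES
        [email protected]
        AuthPass=your-app-password
        FromLineOverride=YES

    Alternatively, mailhub=smtp.gmail.com:465 with UseTLS=YES keeps the implicit-SSL behavior that already worked from the command line. Whatever generates the XenCenter alerts may read a different configuration than the interactive ssmtp binary, so make sure both paths point at the same settings.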


  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi All,

    We use the built-in log shipping in SQL Server to log ship to our DR site, but once a month we do a DR test, which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought, no problem, I will script it, but I have run into trouble with it always complaining that the final log ship is too early to apply, even though I don't export the final log until putting the database into norecovery mode.

    Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd-party software (Red Gate SQL Backup, I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR, and run another to get me back with no data loss.

    My scripts are very simplistic at the moment, but here they are. There are 2 servers: primary PARIS, secondary PARIST. The StartAgentJobAndWait procedure was written by someone else (ta) and just checks that the jobs have finished, or quits them if they never end. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names.

    From PARIS:

        /* Disable backup job */
        exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0
        exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0
        exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0
        exec PARIST.master.dbo.DRStage2

    On PARIST, DRStage2:

        DECLARE @RetValue varchar (10)
        EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2, 2
        SELECT ReturnValue = @RetValue
        if @RetValue = 1
        begin
            print 'The Copy Task completed successfully'
        END
        ELSE
            print 'The Copy task failed. This may or may not be a problem; check the restore state of the database'
        SELECT @RetValue = 0
        EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2, 2
        SELECT ReturnValue = @RetValue
        if @RetValue = 1
        begin
            print 'The Restore Task completed successfully'
        END
        ELSE
            print 'The Restore task failed. This may or may not be a problem; check the restore state of the database'
        exec PARIS.master.dbo.DRStage3

    DRStage3 (do the last log ship and move it to Trumpington):

        BACKUP log "BOB2" to disk='c:\drlogshipping\BOB2.bak' with compression, norecovery
        EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping'
        EXEC PARIST.master.dbo.DRTransferFinish

    DRTransferFinish:

        AS BEGIN
        restore database "BOB2" from disk='c:\drlogshipping\bob2.bak' with recovery


  • LDAP change user pass on client

    - by Sean
    I am trying to allow LDAP users to change their password on client machines. I have tried PAM every which way I can think of, and /etc/ldap.conf and /etc/pam_ldap.conf as well. At this point I'm stuck.

    Client: Ubuntu 11.04
    Server: Debian 6.0

    The current output is this:

    sobrien4@T-E700F-1:~$ passwd
    passwd: Authentication service cannot retrieve authentication info
    passwd: password unchanged

    /var/log/auth.log gives this during the command:

    May 9 10:49:06 T-E700F-1 passwd[18515]: pam_unix(passwd:chauthtok): user "sobrien4" does not exist in /etc/passwd
    May 9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: ldap_simple_bind Can't contact LDAP server
    May 9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: reconnecting to LDAP server...
    May 9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: ldap_simple_bind Can't contact LDAP server

    getent passwd | grep sobrien4 (note: keeping this short since I'm testing with that account; it actually outputs all LDAP users):

    sobrien4:Ffm1oHzwnLz0U:10000:12001:Sean O'Brien:/home/sobrien4:/bin/bash

    getent group shows all LDAP groups.

    /etc/pam.d/common-password (note: this is just the most current; I have tried a lot of different options):

    password required pam_cracklib.so retry=3 minlen=8 difok=3
    password [success=1 default=ignore] pam_unix.so use_authtok md5
    password required pam_ldap.so use_authtok
    password required pam_permit.so

    I popped open Wireshark as well; the server and client are talking. I have the password changing working on the server, i.e. on the server that runs slapd I can log in with the LDAP user and change the passwords. I tried copying the working configs from the server initially, and no dice. I also tried cloning it, just changing IP and host, and no go. My guess is that the client is not authorized by IP or hostname to change a pass. Pertaining to the slapd conf, I saw this in a guide and tried it:

    access to attrs=loginShell,gecos
        by dn="cn=admin,dc=cengineering,dc=etb" write
        by self write
        by * read

    access to *
        by dn="cn=admin,dc=cengineering,dc=etb" write
        by self write
        by * read

    So LDAP seems to be working okay; I just can't change the password.
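    Two hedged observations, as assumptions drawn from the log rather than facts from the post. First, "pam_ldap: ldap_simple_bind Can't contact LDAP server" during chauthtok means the client's pam_ldap cannot reach slapd at all for the password change, so it is worth diffing the host/uri and TLS lines in /etc/pam_ldap.conf against the /etc/ldap.conf that nss is clearly using successfully. Second, a common-password stack that is commonly used with pam_ldap looks like this minimal sketch (assumption: these users' passwords live only in LDAP):

        password  requisite   pam_cracklib.so retry=3 minlen=8 difok=3
        password  sufficient  pam_ldap.so use_authtok
        password  required    pam_unix.so use_authtok nullok md5

    With pam_ldap marked sufficient, a successful LDAP password change ends the stack, and local users still fall through to pam_unix. On the server side, slapd also needs an ACL that lets users write their own password ahead of the catch-all rules, e.g. access to attrs=userPassword by self write by anonymous auth by * none.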


  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced-level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

    - Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
    - System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail, and when the PC is restarted the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

    Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
    Installation date: 6/29/2011 3:00 AM
    Installation status: Failed
    Error details: Code 1
    Update type: Important
    A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
    More information: http://go.microsoft.com/fwlink/?LinkId=216803

    System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
    Installation date: 6/28/2011 3:00 AM
    Installation status: Failed
    Error details: Code 1
    Update type: Important
    This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
    More information: http://support.microsoft.com/kb/947821

    About My System

    I'm running Windows 7 Home Premium x64 Edition. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about 4 months. Windows Updates aside, the system is usually quite stable. Thanks in advance for your help!
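    Since one of the failing items is itself the servicing-store repair tool, a possible next step (an assumption on my part, not something from the question) is to run the servicing checks by hand from an elevated command prompt and read their logs:

        sfc /scannow
        findstr /c:"Error" %WINDIR%\Logs\CBS\CheckSUR.log

    CheckSUR.log only exists once the System Update Readiness Tool has run at least partially; if it lists corrupt manifests or payloads, those specific packages usually have to be repaired before the pending updates will apply.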


  • PHP unable to load dynamic library... but it is loaded. WTF?

    - by Josh
    I have a CentOS 5.4 server with Apache 2.2.3, mod_fastcgi and PHP 5.2.11. Everything is working great, the server is a production server, and I've had no complaints. When I was looking in PHP's error log to diagnose a problem with a specific page, I saw the following periodically repeated over and over:

    [03-May-2010 19:54:12] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/apc.so' - /usr/lib/php/modules/apc.so: undefined symbol: php_rfc1867_callback in Unknown on line 0
    [03-May-2010 19:54:12] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/pdo_mysql.so' - /usr/lib/php/modules/pdo_mysql.so: undefined symbol: php_pdo_get_dbh_ce in Unknown on line 0
    [03-May-2010 19:54:12] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/apc.so' - /usr/lib/php/modules/apc.so: undefined symbol: php_rfc1867_callback in Unknown on line 0
    [03-May-2010 19:54:12] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/pdo_mysql.so' - /usr/lib/php/modules/pdo_mysql.so: undefined symbol: php_pdo_get_dbh_ce in Unknown on line 0

    I am very confused, because APC is loaded: I can see it in the output of phpinfo(). And by the look of these error messages nothing should be working at all. Yet sites seem to be running fine. How can I track down the source of this error?
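    Undefined symbols like php_rfc1867_callback usually mean the .so was built against a different PHP than the binary loading it, or that a second PHP binary (CLI versus the mod_fastcgi workers, say) is picking up the same extension lines. A hedged sketch of how to check which binaries and ini files are actually in play:

        php -v                                              # CLI version
        php -i | grep 'Loaded Configuration'                # which php.ini the CLI reads
        grep -r 'apc\|pdo_mysql' /etc/php.ini /etc/php.d/   # duplicate extension= lines?

    If the extensions appear both in php.ini and in /etc/php.d/*.ini, or the packages came from mixed repos (CentOS base versus a third-party PHP repo), reinstalling the extension packages to match the exact PHP build typically clears the warnings.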


  • How can I debug user mode driver failures in Windows 8

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally. Metro apps won't work. The system may or may not log in. Desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have. Unfortunately I only have one 32 GB card, so I can't test with others. From examining the system event log I've determined this is happening due to a user-mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere?

    Details from the event:

    - Provider Name: Microsoft-Windows-DriverFrameworks-UserMode, Guid: {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
    - EventID: 10110, Version: 1, Level: 1, Task: 64, Opcode: 0, Keywords: 0x2000000000000000
    - TimeCreated SystemTime: 2012-10-29T00:51:57.532718300Z
    - EventRecordID: 40417
    - Execution ProcessID: 1056, ThreadID: 3796
    - Channel: System, Computer: thebrain
    - Security UserID: S-1-5-18
    - UserData UMDFHostProblem lifetime: {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}, Problem code: 3, detectedBy: 2
    - ExitCode: 3, Operation code: 259, Message: 72448, Status: 4294967295

    Edit 1

    So I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). The output it gave was not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user-mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much. It didn't catch any exceptions as I'd hoped (which would point me to the cause of the crash at least). Here's the stack of one of the threads:


  • MSSQLSERVER Will Not Start - Event ID 913 and 1814

    - by ThaKidd
    Hello ServerFault! I need some serious help. I have a major database server down and am scratching my head at how to fix it. The server was hit by rolling blackouts last week in Dallas, and since then Microsoft SQL 2005 SP2 will not start up. I am getting the following errors (both when starting the service and while trying to execute sqlservr.exe -c -f -m):

    Event Type: Error
    Event Source: MSSQLSERVER
    Event ID: 913
    Could not find database ID 3. Database may not be activated yet or may be in transition. Reissue the query once the database is available. If you do not think this error is due to a database that is transitioning its state and this error continues to occur, contact your primary support provider. Please have available for review the Microsoft SQL Server error log and any additional information relevant to the circumstances when the error occurred.

    and...

    Event Type: Information
    Event Source: MSSQLSERVER
    Event ID: 1814
    Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

    I have tried to rename tempdb.mdf to tempdb.old with no success. I have checked and have 193 GB of free hard drive space. What else might cause this problem? Could the server need a chkdsk run on it, or do I need to be looking at some other area of the database server? Any help is greatly appreciated. Thank you in advance.
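    Database ID 3 is model, which SQL Server needs in order to build tempdb at startup, so the two events are likely the same root problem (an inference from the fixed system database IDs: master=1, tempdb=2, model=3, msdb=4). A hedged sketch of a recovery path, assuming the model files survived the outage:

        REM start the instance minimally, recovering master only
        net start MSSQLSERVER /f /T3608

        REM check the state and file paths SQL Server has recorded for model
        sqlcmd -E -S . -Q "SELECT name, state_desc FROM sys.databases; SELECT name, physical_name FROM master.sys.master_files WHERE database_id = 3"

    If model's files are missing or damaged, restoring model from backup (or copying clean model.mdf/modellog.ldf files from the installation templates) and then restarting normally should let tempdb be recreated automatically.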


  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a 4-bedroom, two-story unit. It’s roughly 2,200 sq ft. I want the absolute max throughput possible in all focal points. We’re all in internet-related industries. Between gaming and web development, latency and throughput are major factors for us. Here are our main focal points:

    1) Garage (office), downstairs
    2) Each bedroom (x4), upstairs
    3) Living room, downstairs

    The fastest line we can get is Comcast 50 Mb down / 5 Mb up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedrooms, and we also may bring them down to the office every now and then for “LAN” sessions. Most wireless use will happen downstairs with our laptops, but since we may do LAN sessions, hard-wired latency may be important there too.

    My concerns: if we do only wireless, there would be too much latency for gaming. I don’t know if placing one D-Link DGL-4500 (which I currently own) on the top floor would be enough. (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router) As far as I’m aware, wireless signals transfer best top-down. Would this wireless router be enough on the top floor, and that’s it?

    My second strategy was a combination of wiring and wireless, but I’m not sure what the easiest way to do this is. This is a place we’re renting, so I’m not sure how much leeway we have with wiring, but we’re all pretty competent... if we can’t drill through a wall we can probably “stitch” the cables across the edges wherever needed. Thoughts on the optimal way to do this?


  • Use Apache authentication + authorization to control access to Subversion subdirectories

    - by Stefan Lasiewski
    I have a single SVN repo at /var/svn/ with a few subdirectories. Staff must be able to access the top-level directory and all subdirectories within it, but I want to restrict access to subdirectories using alternate htpasswd files. This works for our staff:

    <Location />
      DAV svn
      SVNParentPath /var/svn
      AuthType Basic
      # mod_authnz_ldap
      AuthBasicProvider ldap
      AuthzLDAPAuthoritative off
      AuthLDAPURL "ldap.example.org:636/ou=people,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org?uid?sub?(objectClass=PosixAccount)"
      AuthLDAPGroupAttribute memberUid
      AuthLDAPGroupAttributeIsDN off
      Require ldap-group cn=staff,ou=PosixGroup,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org
    </Location>

    Now, I am trying to restrict access to a subdirectory with a separate htpasswd file, like this:

    <Location /customerA>
      DAV svn
      SVNParentPath /var/svn
      # mod_authn_file
      AuthType Basic
      AuthBasicProvider file
      AuthUserFile /usr/local/etc/apache22/htpasswd.customerA
      Require user customerA
    </Location>

    I can use Firefox and curl to browse to this folder fine:

    curl https://svn.example.org/customerA/ --user customerA:password

    But I cannot check out this SVN repository:

    $ svn co https://svn.example.org/customerA/
    svn: Repository moved permanently to 'https://svn.example.org/customerA/'; please relocate

    And in the server logs, I get this strange error:

    # httpd-access.log
    192.168.19.13 - - [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 401 401
    192.168.19.13 - customerA [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 301 244

    # httpd-error.log
    [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Could not fetch resource information. [301, #0]
    [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Requests for a collection must have a trailing slash on the URI. [301, #0]

    My question: Can I restrict access to Subversion subdirectories using Apache access controls? DocumentRoot is commented out, so it's not clear that the FAQ at http://subversion.apache.org/faq.html#http-301-error applies.
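    One caveat worth noting (an observation, not from the original post): with SVNParentPath /var/svn, each immediate subdirectory of /var/svn is treated as a separate repository, so /customerA only resolves if it is its own repository rather than a folder inside one. For per-path restrictions inside a single repository, mod_authz_svn's path-based rules are the usual tool; a hedged sketch, where the repo name, group and paths are assumptions:

        <Location />
          DAV svn
          SVNParentPath /var/svn
          # LDAP authentication directives as above ...
          AuthzSVNAccessFile /usr/local/etc/apache22/svn-access
        </Location>

    with an access file along these lines:

        [groups]
        staff = alice, bob

        [repo:/]
        @staff = rw

        [repo:/customerA]
        customerA = rw

    This keeps a single <Location> block and moves the per-directory decisions into the access file, which Subversion clients handle without the 301 confusion.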


  • My URL has been identified as a phishing site

    - by user2118559
    A few months ago I ordered a VPS at Ramnode. Following a tutorial (ZPanelCP on CentOS 6.4, http://www.zvps.co.uk/zpanelcp/centos-6), I installed CentOS and ZPanel. Today I received this email:

    We are requesting that you secure and investigate the phishing website identified below. This URL has been identified as a phishing site and is currently involved in identity theft activities.

    URL: hxxp://111.11.111.111/www.connet-itunes.fr/iTunesConnect.woasp/
    //IP is modified (not real)

    This site is being used to display false or spoofed content in an apparent effort to steal personal and financial information. This matter is URGENT. We believe that individuals are being falsely directed to this page and may be persuaded into divulging personal information to a criminal, if the content is not immediately disabled.

    Trying to understand: some hacker hacked the VPS and placed some file (?) with content that redirects to www.connet-itunes.fr/iTunesConnect.woasp?

    Then the questions:
    1) How can I find the file? Where might it be located? The URL is hxxp://111.11.111.111/ (an IP address, not a domain name).
    2) What can I do to protect the VPS (with CentOS)? Any tutorial? Where might the security problem be? I mean, maybe someone faced something similar...
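    To hunt for the dropped content, a hedged starting point (the paths are common Apache/panel document roots, not known details of this particular box):

        # look for the phishing path or domain anywhere under the web roots
        grep -ril "connet-itunes" /var/www /etc/httpd /home 2>/dev/null

        # list web-served files changed in the last two weeks
        find /var/www /home -type f -mtime -14 -ls 2>/dev/null

        # check for extra vhosts or aliases that map the bare IP to that folder
        grep -ri "iTunesConnect" /etc/httpd/conf /etc/httpd/conf.d 2>/dev/null

    Since the URL is served off the bare IP, the default vhost's document root is the first place to look. Once found, take the content offline, then check the Apache access logs for how it was uploaded (often a vulnerable panel or FTP account) and rotate every password on the box.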


  • checksecurity / setuid changes, is this a bug or did somebody break in?

    - by Fabian Zeindl
    I received an email from checksecurity on my Ubuntu 12.04 server with the following content:

    --- setuid.today 2012-06-03 06:48:09.892436281 +0200
    +++ /var/log/setuid/setuid.new.tmp 2012-06-17 06:47:51.376597730 +0200
    @@ -30,2 +30,2 @@
    - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudo
    - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudoedit
    + 143967 4755 2 root root 71288 Fri Jun 1 05:53:44.0000000000 2012 ./usr/bin/sudo
    + 143967 4755 2 root root 71288 Fri Jun 1 05:53:44.0000000000 2012 ./usr/bin/sudoedit
    @@ -42 +42 @@
    - 130507 666 1 root root 0 Sat Jun 2 18:04:57.0752979385 2012 ./var/spool/postfix/dev/urandom
    + 130507 666 1 root root 0 Mon Jun 11 08:47:16.0919802556 2012 ./var/spool/postfix/dev/urandom

    At first I was worried; then I realized that the change was actually 2 weeks ago. I think there was a sudo update back then. Since checksecurity runs in /etc/cron.daily, I wondered why I only got that email now. I looked into /var/log/setuid/ and found the following files:

    total 32
    -rw-r----- 1 root adm   816 Jun 17 06:47 setuid.changes
    -rw-r----- 1 root adm   228 Jun  3 06:48 setuid.changes.1.gz
    -rw-r----- 1 root adm   328 May 27 06:47 setuid.changes.2.gz
    -rw-r----- 1 root root 1248 May 20 06:47 setuid.changes.3.gz
    -rw-r----- 1 root adm  4473 Jun 17 06:47 setuid.today
    -rw-r----- 1 root adm  4473 Jun  3 06:48 setuid.yesterday

    The obvious thing that confuses me is that the file setuid.yesterday is not from yesterday (= Jun 16). Is this a bug?
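    To confirm the benign explanation (a sudo package upgrade around Jun 1), the dpkg log can be checked directly; a quick sketch:

        zgrep " upgrade \| install " /var/log/dpkg.log* | grep sudo

    If that shows a sudo upgrade dated Fri Jun 1, the new inode, size and timestamp in the diff are fully explained, and the only oddity left is checksecurity's scheduling, not a break-in.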


  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted...

    - for best write performance
    - for use only with embedded Linux
    - for better reliability (random power failures may occur)
    - using a 64 kB cluster size

    I'm using an 8GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I kept noticing that my write performance is always best with the pre-installed FAT32 from Kingston. If I reformat the card with FAT32, the performance still suffers. After browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted for a 64 kB cluster size.

    Risks of reformatting

    Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient.
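    For reference, a hedged sketch of re-creating vendor-style FAT32 geometry from Linux (assumptions: the card appears as /dev/mmcblk0 and its erase block is 4 MiB; this destroys all data on the card):

        # partition starting at a 4 MiB boundary to match the likely erase block
        parted -s /dev/mmcblk0 mklabel msdos mkpart primary fat32 4MiB 100%

        # FAT32 with 64 kB clusters: 128 sectors/cluster x 512 bytes per sector
        mkfs.vfat -F 32 -s 128 /dev/mmcblk0p1

    Note that ext3/ext4 cannot match the 64 kB cluster size on ARM (the filesystem block size is capped at the 4 kB page size), so if the power-failure robustness of a journaling filesystem is required, the cluster-size optimization has to be given up or only approximated with alignment options.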


  • EventID 1058 Code 5, Sysvol is subdir of Sysvol - how to fix?

    - by nulliusinverba
    I have been trying to resolve this error, like many others:

    The processing of Group Policy failed. Windows attempted to read the file \\domain.local\SysVol\domain.local\Policies\{3EF90CE1-6908-44EC-A750-F0BA70548600}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following: a) Name Resolution/Network Connectivity to the current domain controller. b) File Replication Service Latency (a file created on another domain controller has not replicated to the current domain controller). c) The Distributed File System (DFS) client has been disabled. Error code: 5 = Access Denied.

    The incredibly helpful post is this one: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/A_1073-Diagnosing-and-repairing-Events-1030-and-1058.html. Quoting from this post:

    HERE IS A LIST OF POTENTIAL PROBLEMS THAT CAN LEAD TO 1030 AND 1058 EVENT ERRORS:
    - Sometimes the permissions of the file folders that contain Group Policies (the Sysvol folder) can be corrupted.
    - Sometimes you have problems with NetBIOS.
    - Sometimes the GPO itself is corrupt, or you have a partial set of data for that GPO.
    - Sometimes you may have problems with File Replication Services, which almost always indicates a problem with DNS.
    - Sysvol may be a subfolder of itself: Sysvol/Sysvol

    I have the problem listed where Sysvol is a subfolder of Sysvol. The directory structure is:

    - sysvol
      - domain
      - staging
      - staging areas
      - sysvol (shared as "\\server\sysvol")
        - domain.local
          - ClientAgent
          - Policies
          - scripts

    Interestingly, the second sysvol folder is the one that is shared as "\\server\sysvol". This makes me confident this is the issue with the permissions and error code 5. Also interestingly, my Server 2008 R2 servers can see it fine; my Server 2008 servers cannot, and get the error. This is consistent across all my servers. This latter fact makes me uncertain what I need to do to fix this up. Do I, e.g., simply move the shared sysvol folder up a level to replace the non-shared one?

    Any help greatly appreciated. Cheers, Tim.
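    If it does turn out to be a Sysvol replication problem rather than simple share permissions, the commonly cited repair (the standard FRS BurFlags procedure, not something from the thread above) is a non-authoritative restore on the affected DC:

        REM on the broken DC, after backing up the Policies and scripts folders
        net stop ntfrs
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 210
        net start ntfrs

    210 decimal is 0xD2, the non-authoritative-restore flag; FRS then re-syncs Sysvol from a good replication partner. Verify first which DC actually holds a good copy, and treat this as a sketch to validate against the FRS documentation before running it.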


  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is search for a string in the filename and replace it with another string, then search for a second string and replace it with another string, and so on. That is, I have multiple strings to replace with other multiple strings. It may be like:

    - "abc" in a filename replaced with "def" (the string "abc" may be in many files)
    - "jkl" in a filename replaced with "srt" (the string "jkl" may be in many files)
    - "pqr" in a filename replaced with "xyz" (the string "pqr" may be in many files)

    I am currently using an Excel macro to get the file names into Excel, preserving the original names in one column and the replaced names in another column. Then I create a batch file for the same, like:

    rename Path\OriginalName1 NewName1
    rename Path\OriginalName2 NewName2

    The problem with the above procedure is that it takes a lot of time, as the files are many, and as I am using Excel 2003 there is a limitation on the number of rows as well. I need a script, like:

    replacestr abc with def
    replacestr pqr with xyz

    in a single directory. Would it be better to do this in a Unix script?
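    For reference, a small shell sketch that applies the whole substitution table in one pass (assumptions: a Unix-like shell is available and all files sit in one directory; the pairs are the ones from the question):

        #!/bin/sh
        # each entry: old-substring new-substring
        for pair in "abc def" "jkl srt" "pqr xyz"; do
            set -- $pair
            for f in *"$1"*; do
                [ -e "$f" ] || continue          # skip when nothing matches the glob
                mv -- "$f" "$(printf '%s' "$f" | sed "s/$1/$2/g")"
            done
        done

    Each outer iteration renames every file containing the old substring; put an echo in front of mv to dry-run it first. On Debian-style systems the Perl rename utility does the same per pair: rename 's/abc/def/g' *abc*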


  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on the exact behavior of browsers when a browser gets multiple A records for a given hostname (say ip1 and ip2), and one of them is not accessible? I am interested in EXACT details, like (but not limited to):

    1. Will the browser get 2 IPs from the OS, or will it get only one?
    2. Which IP will the browser try first (random, or always the first one)?
    3. Now, let's say the browser started with the failed ip1: for how long will the browser try ip1?
    4. If the user hits "stop" while it waits for ip1, and then clicks refresh, which IP will the browser try?
    5. What will happen when it times out: will it start trying ip2, or give an error? (And if an error, which IP will the browser try when the user clicks refresh?)
    6. When the user clicks refresh, will any browser attempt a new DNS lookup?
    7. Now let's assume the browser tried the working ip2 first. For the next page request, will the browser still use ip2, or may it randomly switch IPs?
    8. For how long do browsers keep IPs in their cache?
    9. When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch and it may try either of the two?

    Of course it all may be browser dependent, and may also vary between versions and platforms; I'd be happy to have maximum detail. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.

