Search Results

Search found 6397 results on 256 pages for 'ssh agent'.

Page 217/256 | < Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >

  • Unable to connect to mail server via IMAP and roundcube

    - by mrhatter
    I am having trouble getting the final parts of my mail server up and working. I followed this tutorial to get everything set up on the mail server side. I have installed roundcube for webmail and configured it, but it says "error connecting, connection refused" when attempting to connect to it using IMAP. This is through the "test imap" section of its installer. It is also giving me an error message about permissions for its log and temp folders, but that's not as important as actually getting mail to work. I have also tried connecting to the mail server using thunderbird, however it cannot establish a connection either, and I know my login information is correct. I know that the databases are working correctly based on the roundcube installer telling me that they have been "successfully initialized". Here are my firewall rules -A INPUT -i lo -j ACCEPT -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT -A INPUT -p tcp -m tcp --dport 487 -j ACCEPT -A OUTPUT -p tcp -m tcp --dport 993 -j ACCEPT -A INPUT -j DROP which I set up in iptables. I have modified them from what I used in this tutorial. I'm not sure what to try next. Any help would be wonderful! I am using Ubuntu 14.04 server, apache 2.4.7, roundcube 1.0.1, and the latest versions of dovecot and postfix. The email databases are contained in mysql. I am running this on a VPS server. UPDATE: I have changed from iptables to ufw. I have run the following commands to set up a basic firewall with ufw: ufw default deny ufw allow ssh ufw allow http ufw allow https ufw allow imap ufw allow imaps ufw allow smtp I then used telnet to check all of the mail ports, but port 993 isn't working even though ufw says both 993 and 993/tcp are open. What am I missing?
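
    One detail worth noticing above: the original iptables rules open 993 on the OUTPUT chain rather than INPUT, which would never admit inbound IMAPS. For the ufw setup, a hedged diagnostic sketch (assumes the Ubuntu 14.04 stack described, run on the mail server itself) is to confirm dovecot is listening before blaming the firewall:

      sudo ss -tlnp | grep -E ':(143|993) '   # is anything bound to the IMAP ports at all?
      sudo doveconf protocols listen          # dovecot's enabled protocols and bind address
      sudo ufw status verbose                 # confirm 993/tcp really is allowed
      openssl s_client -connect 127.0.0.1:993 # test the IMAPS handshake locally first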

    Read the article

  • What 5 things should SQL Server get rid of?

    - by BuckWoody
    I’ve been “tagged” by my friend Paul Randal. It’s a high-tech way of making someone else do what you want, but since it’s Paul, well, I guess I’m OK with that. He’s asked in his recent blog entry “What five things would you get rid of in SQL Server if you were in charge?” This is, of course, a delicate issue. After all, I work at Microsoft, so anything I say here might be taken as a criticism that would require action – but of course it really doesn’t. Interestingly, you may have more to do with what goes in to SQL Server than I did even as a Program Manager where I “owned” a feature. Unlike many places I’ve worked, Microsoft really does drive its products by what its users want – not every time, and not every user request, mind you, but overall I think we hit the mark pretty well. So, with all of that said, and of course the obligatory statement of “these are my own opinions, and have nothing to do with any official Microsoft position in any way, and do not reflect the opinions of other Microsoft employees or management”, here goes. 1. Get rid of SQL Server Management Studio Does that surprise you? After all, when I was a Program Manager, I actually owned the general architecture for SSMS. But those on my team probably would have been able to guess this one for you. I think that SSMS is a fine development tool. But I think it does less well at managing a system. It’s based on Visual Studio, probably one of the best development IDEs around. And when I develop code, I really like it. But for a monitoring/management tool, I prefer a snap-in to the Microsoft Management Console (MMC). I know, the old one (prior to 3.0) was kludgy, difficult to use and program in. But that’s changed. Of course, when I bring this up, you’ll probably immediately say “But I don’t have that in XP.” And that’s one of the reasons we didn’t go there. (But I still don’t like SSMS for management.) 2. ShrinkDB I think this discussion has been done to death, so I’ll leave it at that. 3. SQL Server Agent Does that one surprise you as well? In my mind, since we ALWAYS ride on Windows, just use the task scheduler there, along with PowerShell. You could log the results in Windows logs, files, back into SQL Server, whatever. It’s just a complexity we don’t need in SQL Server. 4. SQL Server Error Logs We have a full logging setup in Windows. It is well done, easy to understand and ubiquitous. We should just use that. 5. Several SKUs I won’t say which, but we have a few SKUs of SQL Server that need to go. And we need to figure out how to help you understand clearly where you need to go to Enterprise or Data Center. Most folks are trying to push Standard edition to do things it isn’t designed to do, and then they think SQL Server won’t scale. I think we can do a better job of showing you where Standard Edition will hit the wall, and I think with fewer choices it would be pretty simple for you to pick the right one. Well, once again I’ve probably puzzled some folks and angered others. I think my work here is done. :) Back to you, Paul.

    Read the article

  • How to force a clock update using ntp?

    - by ysap
    I am running Ubuntu on an ARM-based embedded system that lacks a battery-backed RTC. The wake-up time is somewhere in 1970. Thus, I use the NTP service to update the time to the current time. I added the following line to the /etc/rc.local file: sudo ntpdate -s time.nist.gov However, after startup it still takes a couple of minutes until the time is updated, during which period I cannot work effectively with tar and make. How can I force a clock update at any given time? UPDATE 1: The following (thanks to Eric and Stephan) works fine from the command line, but fails to update the clock when put in /etc/rc.local: $ date ; sudo service ntp stop ; sudo ntpdate -s time.nist.gov ; sudo service ntp start ; date Thu Jan 1 00:00:58 UTC 1970 * Stopping NTP server ntpd [ OK ] * Starting NTP server [ OK ] Thu Feb 14 18:52:21 UTC 2013 What am I doing wrong? UPDATE 2: I tried following the few suggestions that came in response to the 1st update, but nothing seems to actually do the job as required. Here's what I tried: Replace the server with us.pool.ntp.org Use explicit paths to the programs Remove the ntp service altogether and leave just sudo ntpdate ... in rc.local Remove the sudo from the above command in rc.local Using the above, the machine still starts at 1970. However, when doing this from the command line once logged in (via ssh), the clock gets updated as soon as I invoke ntpdate. The last thing I did was to remove that from rc.local and place a call to ntpdate in my .bashrc file. This does update the clock as expected, and I get the true current time once the command prompt is available. However, this means that if the machine is turned on and no user is logged in, then the time never gets updated. I can, of course, reinstall the ntp service so at least the clock is updated within a few minutes of startup, but then we're back at square 1. So, is there a reason why placing the ntpdate command in rc.local does not perform the required task, while doing so in .bashrc works fine?
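
    One plausible explanation, not a certainty: /etc/rc.local can run before the network is up on a board like this, so ntpdate fails silently at boot but succeeds later from a shell. A sketch of an rc.local fragment that waits for the NTP host to become reachable first:

      #!/bin/sh -e
      # hypothetical /etc/rc.local fragment: wait for the network, then step the clock
      for i in $(seq 1 30); do
          ping -c 1 -W 1 us.pool.ntp.org >/dev/null 2>&1 && break
          sleep 1
      done
      service ntp stop
      /usr/sbin/ntpdate -s us.pool.ntp.org
      service ntp start
      exit 0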

    Read the article

  • Best way to remote restart Ubuntu from Windows machine

    - by robsoft
    Background: I'm looking to put a series of Ubuntu machines into retail locations, they're being used as dumb kiosks to show a series of slides onto large LCD panel TV screens. Once installed, they won't have a keyboard or mouse connected but will have a fixed IP on the local network. Everything is configured to auto-start, no automatic updates, no power saving etc - I think we're pretty much good to go apart from one thing. I need the retail staff to be able to restart the boxes if a problem arises. We have VNC running (now that we've turned off desktop enhancements!) so that we can remotely get into the machines if we need to, but that's not something we would allow the retail staff to do. The machines are going to be physically 'out of the way' (probably in the ceiling space) so the power button is not easily accessible! I'd like to have some means of allowing the retail staff to restart the Ubuntu machine from the desktop of one of their Windows terminals. I don't really want to give them some kind of raw terminal access (the command line will frighten them!) and I don't want them to use VNC (as stated above). Ideally there would be an icon on the Windows desktop, they double-click it, reply to a simple 'are you sure?' prompt, and then the Ubuntu box is told to restart. The Windows side of that won't be a problem, we can write something using Delphi, Python & Qt4, whatever - it's the Ubuntu side of it I'm stuck with. Out of sight/view, could I have a Windows program open a terminal across the network and tell Ubuntu to restart? Is this what SSH could be used for? (I have never set that kind of thing up.) The Windows programming side isn't really an issue, it's just that I'm a total Ubuntu noob and don't know where to start from the platform point of view. The other thing we considered is also having the machine automatically restart itself at a set time each day (obviously out of store hours!). To me, that seems a bit unnecessary (though forcing a restart once a week/month might be worthwhile). Any thoughts or suggestions? Being able to restart the box on demand across the network is my prime requirement.
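
    A common pattern for exactly this requirement is a single-purpose SSH key. A sketch under stated assumptions (the account name 'kiosk' and the key names are hypothetical, not from the post):

      # /etc/sudoers.d/kiosk-reboot -- the 'kiosk' account may reboot, nothing else
      kiosk ALL=(root) NOPASSWD: /sbin/reboot

      # ~kiosk/.ssh/authorized_keys -- this key can only ever trigger that command
      command="sudo /sbin/reboot",no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... store-key

    On the Windows side, PuTTY's plink.exe can fire the key non-interactively (e.g. plink -batch -i store-key.ppk kiosk@192.168.0.10), which is easy to hide behind a desktop icon with an 'are you sure?' prompt.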

    Read the article

  • Perfect End to a Bad Day

    - by TehGrumpyCoder
    Yesterday's post about A Bad Day at Work actually had an addendum to it. There were apparently a bunch of guys on ice skates last night competing in some sport way the hell and gone over on the other side of the valley, and enough people couldn't live without seeing them that they had all major arteries heading west honked. I mean honked... the traffic guy reported the 101 had 16 miles of backup... yikes. Since I worked downtown for a number of years, my fallback is to cut across the city on surface streets to get to one of my old 'haunts' and just drive it home from there. Of course with the 101 backed up, then I17 would logically be as well, so I kept the news on rather than my Zune and heard where the bad stuff was going North. I popped out on the freeway about 7 miles south of my exit. Got to the exit which is about a mile from the house without killing or maiming me or anyone else. Waited patiently at the light in the inside lane to make a left and go under the freeway proceeding West. The light changed, I had full green, I started through and whoa... I've got someone in a little rat car crossing my bow! A little explanation... I drive a 3/4 ton pickup with a V-10, extended cab and shell on the back. It's not jacked up, but it sits up pretty good and is longer than any parking place I've ever tried to put it into. I consider this truck to be the consolation prize for paying uninsured motorist coverage for 45 years and having Pilar Martinez totally destroy a 3/4 ton Silverado on March 1, 2007 by plowing into me at traffic speed while I was stopped at a light. If you pay for uninsured motorist coverage, ask your insurance agent *exactly* what that means... I bet it's different than what you think it means. But I digress, sorry... So here I am with a car that is shorter from top to road than the hood on my truck, and the driver thought it would be safe to run a red light and see if they could get past me before I got into the lane. The right side of my front bumper was almost into the driver's window when I hit the brakes and wheeled it left. Fortunately for all involved, I saw it soon enough, and pulled into the 2nd lane for making a left to go back South. I looked in my mirror, signalled a move, then moved over behind the yuck in the rat car. I then punched it, and the future hood ornament and I both made it through the next light. I pulled alongside to let her know that she was DEFINITELY Number 1 in my book, and it's a middle-age woman looking at me with a "sorry, it was an accident" show of pouty face and arms held up. Tough $hit lady... that may have worked when you were 18, but it's not working anymore, and it wasn't an accident... you ran a freakin' red light and almost got yourself killed. That just about put a bow on the day... I was home later than usual, pissed off about work stuff, pissed off at traffic, and now that. I ate dinner, watched a little TV, and was asleep about 9:30 exhausted. Hope today is better.

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities of its underlying data store, the Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, also include: The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query – not by comparing the document to all the others. For example, a search for “U.S.” in another approach might match to the title of a document and get a high ranking. But what if it were a collection of government documents in which “U.S.” appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly. The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the results. Determining this second-order relevance is the key to providing effective guidance. Support for Queries and Filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike “black box” relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, it provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don’t explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.
Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide that user in a way that provides additional discovery, beyond what they may have anticipated. This is why these advanced search features inside the Endeca Server are so important. They have helped to guide our great customers to success.

    Read the article

  • Problems uploading package to launchpad

    - by user74513
    I'm having a lot of problems uploading my showdown project to a PPA. I've correctly set up my PGP keys and my public ssh key on launchpad. I've packaged my C++ project with debuild, producing a source package; lintian gave me only these two warnings, which I think are OK under the showdown rules: W: massren source: native-package-with-dash-version W: massren source: binary-nmu-debian-revision-in-source 1.0-0extras12.04.1~ppa2 Producing a binary package works too, and the package installs without problems on my ubuntu 12.04 machine; I only have a few more lintian warnings about the fact I'm installing in /opt/extras.ubuntu.com/ I'm uploading with: dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes When I upload with dput I get no errors, signatures seem OK, and the public key seems accepted too (since the upload proceeds without asking for passwords...): dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes Checking signature on .changes gpg: Signature made Mon 02 Jul 2012 10:00:38 AM CEST using RSA key ID 49982576 gpg: Good signature from "Gabriele Greco " Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2_source.changes. Checking signature on .dsc gpg: Signature made Mon 02 Jul 2012 10:00:33 AM CEST using RSA key ID 49982576 gpg: Good signature from "Gabriele Greco " Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2.dsc. Uploading to ppa (via ftp to ppa.launchpad.net): Uploading massren_1.0-0extras12.04.1~ppa2.dsc: done. Uploading massren_1.0-0extras12.04.1~ppa2.tar.gz: done. Uploading massren_1.0-0extras12.04.1~ppa2_source.changes: done. Successfully uploaded packages. At the moment I'm not receiving responses from the launchpad site, but the upload does not show in the ppa page. Previous attempts gave me response e-mails with different kinds of errors: File massren_1.0-0extras12.04.1~ppa1.tar.gz mentioned in the changes has a checksum mismatch. 1503fa155226cbc4aba2f8ba9aa11a75 != 294a5e0caf3fe95b0b007a10766e9672 File massren_1.0-0extras12.04.1~ppa1.tar.gz mentioned in the changes has a checksum mismatch. 1503fa155226cbc4aba2f8ba9aa11a75 != 294a5e0caf3fe95b0b007a10766e9672 Or more cryptic: GPG verification of /srv/launchpad.net/ppa-queue/incoming/upload-ftp-20120629-163320-001135/~gabrielegreco/massren/ubuntu/massren_1.0-0extras12.04.1~ppa1.dsc failed: Verification failed 3 times: ["(7, 58, u'No data')", "(7, 58, u'No data')", "(7, 58, u'No data')"] Further error processing not possible because of a critical previous error. Any idea how I can solve this problem? I'm new to ubuntu packaging, so I may be missing some step... Is there an alternative to dput (aka manual upload)?
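
    A hedged reading of the checksum-mismatch mails: Launchpad appears to already hold a tarball for that exact version from an earlier attempt, and re-uploads of the same version with different contents are rejected. A sketch of the usual workaround (the ~ppa3 version string is illustrative):

      # bump the version so Launchpad sees a brand-new source tarball
      dch -v 1.0-0extras12.04.1~ppa3 "No-change rebuild for PPA upload"
      debuild -S -sa      # -sa forces the full source tarball into the .changes
      dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa3_source.changes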

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for the past few days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown:

      var url = 'http://' + document.location.host + '/glassfish-sse/simple';
      eventSource = new EventSource(url);
      eventSource.onmessage = function (event) {
          var theParagraph = document.createElement('p');
          theParagraph.innerHTML = event.data.toString();
          document.body.appendChild(theParagraph);
      }

    This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you'll parse JSON and do other processing if some other data format is received from the URL. The URL to which the EventSource is subscribed is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below:

      @ServerSentEvent("/simple")
      public class MySimpleHandler extends ServerSentEventHandler {
          public void sendMessage(String data) {
              try {
                  connection.sendMessage(data);
              } catch (IOException ex) { . . . }
          }
      }

    And then events can be sent to this handler using a singleton session bean as shown:

      @Startup
      @Stateless
      public class SimpleEvent {
          @Inject @ServerSentEventContext("/simple")
          ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers;

          @Schedule(hour="*", minute="*", second="*/10")
          public void sendDate() {
              for(MySimpleHandler handler : simpleHandlers.getHandlers()) {
                  handler.sendMessage(new Date().toString());
              }
          }
      }

    This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like:

      Accept: text/event-stream
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Cache-Control: no-cache
      Connection: keep-alive
      Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b
      Host: localhost:8080
      Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml
      User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5

    And the response headers as:

      Content-Type: text/event-stream
      Date: Thu, 14 Jun 2012 21:16:10 GMT
      Server: GlassFish Server Open Source Edition 4.0
      Transfer-Encoding: chunked
      X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6)

    Notice, the MIME type of the messages from server to the client is text/event-stream and that is defined by the specification.
The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below:

      @Schedule(hour="*", minute="*", second="*/10")
      public void sendTweets() {
          for(MyTwitterHandler handler : twitterHandler.getHandlers()) {
              String result = twitter.search("glassfish", String.class);
              handler.sendMessage(result);
          }
      }

    The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions. Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you’re an EBS customer. Any information you can supply that helps us understand the situation better, helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR! Be right. Make sure you have the correct severity level. Since you select the initial severity level, it’s easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate. Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner. Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner. Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going the right direction, the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes. Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience. Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we’ll be able to do this with your Service Request too.  

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let's continue with TPL Dataflow: Asynchronous functions TPL Dataflow Task-based asynchronous Pattern Part II: TPL Dataflow Definition (quoting the Async CTP doc): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? how do you avoid concurrency issues? how do you notify when data is available? how do you control how much data is waiting to be consumed? etc.  Dataflow Blocks TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks Executor Blocks Joining Blocks Think of them as electronic circuitry components :).. 1. BufferBlock<T>: it is a FIFO (First In First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process it). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N number of consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value and once it’s been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each data item in the queue. You could also specify how many parallel executions to allow (degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections.
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.

    Read the article

  • Correct permissions for /var/www and wordpress

    - by dpbklyn
    Hello and thank you in advance! I am relatively new to ubuntu, so please excuse the newbie-ness of this question... I have set up a LAMP server (ubuntu server 11.10) and I have access via SSH and to the "it works" page from a web browser from inside my network (via ip address) and from outside using dyndns. I have a couple of projects in development with some outside developers and I want to use this server as a development server for testing and for client approvals. We have some Wordpress projects that sit in subdirectories in /var/www/wordpress1, /var/www/wordpress2, etc. I cannot access these subdirectories from a browser in order to set up WP--or (I assume) to see the content in a browser. I get a 403 Forbidden error on my browser. I assume that this is a permissions problem. Can you please tell me the proper settings for the permissions to: 1) Allow the developers and me to read/write. 2) Allow WP to set up and do its thing. 3) Allow visitors to access the site(s) via the web. I should also mention that the subfolders are actually symlinks to folders on another internal hdd--I don't think this will make a difference, but I thought I should disclose. Since I am a newbie to ubuntu, step-by-step directions are greatly appreciated! Thank you for taking the time! dp total 12 drwxr-xr-x 2 root root 4096 2012-07-12 10:55 . drwxr-xr-x 13 root root 4096 2012-07-11 20:02 .. lrwxrwxrwx 1 root root 43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media -rw-r--r-- 1 root root 177 2012-07-11 17:50 index.html lrwxrwxrwx 1 root root 14 2012-07-11 20:42 media -> /hdd/web/media lrwxrwxrwx 1 root root 18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress Here is the result of using chown -R www-data:www-data /var/www total 12 drwxr-xr-x 2 www-data www-data 4096 2012-07-12 10:55 . drwxr-xr-x 13 root root 4096 2012-07-11 20:02 .. lrwxrwxrwx 1 www-data www-data 43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media -rw-r--r-- 1 www-data www-data 177 2012-07-11 17:50 index.html lrwxrwxrwx 1 www-data www-data 14 2012-07-11 20:42 media -> /hdd/web/media lrwxrwxrwx 1 www-data www-data 18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress I am still unable to access via browser...
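
    One detail that may explain the ls output above: chown -R does not follow symbolic links, so the targets under /hdd/web were likely never touched. A sketch, assuming the developers share a group (the group name 'developers' is hypothetical):

      # operate on the real directories, not the symlinks in /var/www
      sudo chown -R www-data:developers /hdd/web/wordpress
      sudo find /hdd/web/wordpress -type d -exec chmod 2775 {} \;
      sudo find /hdd/web/wordpress -type f -exec chmod 664 {} \;
      # apache must also follow the links and traverse the path to them:
      #   Options FollowSymLinks        (in the <Directory /var/www> block)
      #   and /hdd plus /hdd/web need execute permission for www-data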

    Read the article

  • how did Google Analytics kill my site?

    - by user1813359
    Yesterday I created a google analytics profile for one of my sites and included the JS block in the layout template. What happened next was very strange. Within about 2 minutes, the site had become unreachable. I had been checking the AWStats page for the site when I thought to set up GA. After that had been done, I clicked on the link for 404 stats, which opens in a new tab. It churned for a long while and then showed a nearly blank page, similar to that when Firefox chokes on a badly-formatted XML page, except there was no error msg. But I was logged into the server and could see that that page has an HTML 4.01 Transitional DTD. Strange! I tried viewing source but it just churned endlessly. I then tried "inspect element" and was able to see an error msg having to do with some internal Firefox lib. Unfortunately, I neglected to copy that. :-( All further attempts to load anything on the site would time out. Firebug's Net panel showed no request being made. Chrome would time out. So, I deleted the GA profile, removed the JS block, and cleared the server cache. No joy. I then removed all google cookies and disabled JS. Still nothing. No luck in any other browser. And now my client couldn't access the site. Terrific. I was able to use wget while logged into another server. The retrieved page was fine, and did not contain the GA JS block. However, the two servers are on the same network. (Perhaps a clue.) The server itself was fine. Ping, traceroute looked great. I could SSH in. I tailed the access log and tried a browser request. Nothing. But I forgot to quit, and a minute or so later I saw a request from someone else being logged. Later, I could see that requests had been served all day to some people. Now, 24 hours later, the site works once again, but is still unreachable by the client (who is in another city). So, does anyone have some insight into what's going on? Does this have something to do with google's CDN? I don't know very much about how GA works but what I'm seeing reminds me of DNS propagation issues. And why the initial XML error? And why the heck was the site just plain unreachable? What did google do to my site?! Sorry for the length but I wanted to cover everything.

    Read the article

  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs after the message "Running /scripts/init-bottom ... done" during boot. I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM. Ubuntu Server 12.04.4 LTS was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and on the VirtualBox VM. The idea was to get the same version kernel running on the Proxmox container and the VirtualBox VM. sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade sudo reboot rsync the entire Proxmox container to a temporary directory in the VirtualBox VM: cd / mkdir /tmp/backup rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup Shut down the virtual machine, and boot the VM with a bootable linux image. I used the Desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso Drop to a root prompt. Mount the VM root filesystem: sudo mount /dev/sda1 /mnt Remove files from most of /mnt cd /mnt sudo rm -rf bin etc home lib opt sbin root usr var Move all of the files from /mnt/tmp/backup into /mnt sudo mv /mnt/tmp/backup/* /mnt Reboot the system. For me, at this point the system freezes after starting, after the message: Running /scripts/init-bottom ... done I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.
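
    One plausible cause, not a confirmed one: the container's /etc/fstab and initramfs describe Proxmox's environment, not the VM's disk. A sketch of the usual repair from the live CD (device names follow the steps above):

      sudo mount /dev/sda1 /mnt
      sudo blkid /dev/sda1                  # note the UUID of the VM's root partition
      sudo nano /mnt/etc/fstab              # root entry must name that UUID or /dev/sda1
      # regenerate the initramfs against the VM's hardware from inside a chroot
      for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
      sudo chroot /mnt update-initramfs -u -k all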

    Read the article

  • Cant connect to mysql using self signed SSL certificate

    - by carpii
    After creating a self-signed SSL certificate, I have configured my remote mysqld to use them (and ssl is enabled). I ssh into my remote server, and try connecting to its own mysqld using ssl (the mysql server is 5.5.25)... ~> mysql -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert Enter password: ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1) Ok, I remember reading there's some problem with connecting to the same server via SSL. So I download the client keys down to my local box, and test from there... ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert Enter password: ERROR 2026 (HY000): SSL connection error It's unclear what this "SSL connection error" error refers to, but if I omit --ssl-ca, then I am able to connect using SSL... ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 37 Server version: 5.5.25 MySQL Community Server (GPL) However, I believe that this is only encrypting the connection, and not actually verifying the validity of the cert (meaning I would be potentially vulnerable to a man-in-the-middle attack). The ssl certs are valid (albeit self-signed), and do not have a passphrase on them. So my question is, what am I doing wrong? How can I connect via SSL, using a self-signed certificate? The MySQL Server version is 5.5.25 and the server and clients are CentOS 5. Thanks for any advice. Edit: Note that in all cases, the command is being issued from the same directory where the ssl keys reside (hence no absolute path).
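
    A frequent culprit with self-signed MySQL setups, offered as a guess: the CA, server and client certificates were generated with the same Common Name, which the TLS library rejects. A quick check from the directory holding the keys (the server.cert file name is assumed):

      openssl x509 -in ca.cert -noout -subject      # compare the three subjects --
      openssl x509 -in server.cert -noout -subject  # any duplicate CN breaks the chain
      openssl x509 -in client.cert -noout -subject
      openssl verify -CAfile ca.cert client.cert    # should print client.cert: OK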

    Read the article

  • Pain removing a perl rootkit

    - by paul.ago
    So, we host a geoservice webserver thing at the office. Someone apparently broke into this box (probably via ftp or ssh) and put some kind of irc-managed rootkit on it. Now I'm trying to clean the whole thing up. I found the pid of the process that tries to connect via irc, but I can't figure out which process invoked it (I already looked with ps, pstree, lsof). The process is a perl script owned by the www user, but ps aux | grep displays a fake file path in the last column. Is there another way to trace that pid and catch the invoker? Forgot to mention: the kernel is 2.6.23, which is exploitable to become root, but I can't touch this machine too much, so I can't upgrade the kernel. EDIT: lsof might help:

      lsof -p 9481
      COMMAND PID  USER FD  TYPE DEVICE SIZE    NODE  NAME
      ss perl 9481 www  cwd DIR  8,2    608     2     /
      ss perl 9481 www  rtd DIR  8,2    608     2     /
      ss perl 9481 www  txt REG  8,2    1168928 38385 /usr/bin/perl5.8.8
      ss perl 9481 www  mem REG  8,2    135348  23286 /lib64/ld-2.5.so
      ss perl 9481 www  mem REG  8,2    103711  23295 /lib64/libnsl-2.5.so
      ss perl 9481 www  mem REG  8,2    19112   23292 /lib64/libdl-2.5.so
      ss perl 9481 www  mem REG  8,2    586243  23293 /lib64/libm-2.5.so
      ss perl 9481 www  mem REG  8,2    27041   23291 /lib64/libcrypt-2.5.so
      ss perl 9481 www  mem REG  8,2    14262   23307 /lib64/libutil-2.5.so
      ss perl 9481 www  mem REG  8,2    128642  23303 /lib64/libpthread-2.5.so
      ss perl 9481 www  mem REG  8,2    1602809 23289 /lib64/libc-2.5.so
      ss perl 9481 www  mem REG  8,2    19256   38662 /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/IO/IO.so
      ss perl 9481 www  mem REG  8,2    21328   38877 /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/Socket/Socket.so
      ss perl 9481 www  mem REG  8,2    52512   23298 /lib64/libnss_files-2.5.so
      ss perl 9481 www  0r  FIFO 0,5    1068892       pipe
      ss perl 9481 www  1w  FIFO 0,5    1071920       pipe
      ss perl 9481 www  2w  FIFO 0,5    1068894       pipe
      ss perl 9481 www  3u  IPv4 130646198     TCP    192.168.90.7:60321->www.**.net:ircd (SYN_SENT)
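
    Whatever ps shows, /proc reflects the kernel's view of pid 9481; a short diagnostic sketch:

      grep -i ppid /proc/9481/status           # PPid: the process that invoked it
      ls -l /proc/9481/exe                     # symlink to the real executable
      tr '\0' ' ' < /proc/9481/cmdline; echo   # the unmangled command line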

    Read the article

  • vCenter appliance won't use mail relay server

    - by Safado
    tl;dr: sendmail is configured to use a relay server but still insists on using 127.0.0.1 as the relay, which results in mail not being sent. We have the open source vCenter appliance (v 5.0) managing our ESXi cluster. When connected to it via vSphere Client, you can configure the SMTP relay server to use by going to Administration > vCenter Server Settings > MAIL. There you can set the SMTP Server value. I looked through their documentation and also confirmed on the phone with support that all you have to do to configure mail is to put the relay IP or fqdn in that box and hit OK. Well, I had done that and mail still wasn't sending. So I SSH into the server (which is SuSE) and look at /var/log/mail, and it looks like it's trying to relay the email through 127.0.0.1 and it's rejecting it. So looking through the config files, I see there's /etc/sendmail.cf and /etc/mail/submit.cf. You can configure items in /etc/sysconfig/sendmail and run SuSEconfig --module sendmail to generate those two .cf files based on what's in /etc/sysconfig/sendmail. So playing around, I see that when you set the SMTP Server value in the vCenter gui, all that it does is change the "DS" line in /etc/mail/submit.cf to have DS[myrelayserver.com]. Looking on the internet, it would appear that the DS line is really the only thing you need to change in order to use a relay server. I got on the phone with VMWare support and spent 2 hours trying to modify ANY setting that had anything to do with relays, and we couldn't get it to NOT use 127.0.0.1 as the relay. Just to note, any time we made any sort of configuration change, we restarted the sendmail service. Does anyone know what's going on? Have any ideas on how I can fix this?
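
    A hedged theory consistent with the DS-line observation above: the GUI only edits /etc/mail/submit.cf, while the listening daemon routes mail via /etc/sendmail.cf, whose smart host is still unset. A sketch to confirm and fix:

      grep '^DS' /etc/sendmail.cf /etc/mail/submit.cf
      # if sendmail.cf shows a bare "DS", set it to the same relay, e.g.
      #   DS[myrelayserver.com]
      /etc/init.d/sendmail restart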

    Read the article

  • OSX - User home directories shared via NFS

    - by Hugh
    Hi, I've run into some problems with how I've got user home directories set up on our system here. Our server is an XServe, using Open Directory to manage the user accounts. The majority of our workstations are OSX, but there are a few running Linux (Centos 5.3), and, as time goes on, we expect the proportion of Linux workstations to increase (at some point, we expect to move the server side over to Linux too, but for now we're running with what we've already got) To ensure that the Linux and OSX workstations both see user's home directories in the same place, I shared the home directories using NFS. On the server end, the home directories are stored in: /Volumes/data/company_users This is mounted on the workstations to: /mount/company_users This works fine on the Linux workstations, but there is some weirdness under OSX. For the user who is logged in through the GUI, it all works just fine. However, if a user tries to SSH into a machine that they are not the primary user on, they often have no access to their own home directory. It looks as though OSX is trying to do something else to the user home directories mount point when you log in through the GUI.... For example, on this machine (nv001), I (hugh) am logged into the GUI. Last login: Mon Mar 8 18:17:52 on ttys011 [nv001:~] hugh% ls -al /mount/company_users total 40 drwxrwxrwx 26 hugh wheel 840 27 Jan 19:09 . drwxr-xr-x 6 admin admin 204 19 Dec 18:36 .. drwx------+ 128 hugh staff 4308 27 Feb 23:36 hugh drwx------+ 26 matt staff 840 4 Dec 14:14 matt [nv001:~] hugh% So Matt's home directory is accessible to him. However, if I try to switch to him: [nv001:~] hugh% su - matt Password: su: no directory [nv001:~] hugh% Or: [nv001:~] hugh% su matt Password: tcsh: Permission denied tcsh: Trying to start from "/mount/company_users/matt" tcsh: Trying to start from "/" [nv001:/] matt% Does anyone have any idea why it might be doing this? It's causing me all sorts of problems at the moment... The only machine on which I can successfully switch users at the moment is the server that the user directories are stored on, where /mount/company_users is actually just a symlink to /Volumes/data/company_users Thanks
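
    Some diagnostics (not a fix) to run on an OSX client, comparing what the GUI login and an SSH login actually see:

      mount -t nfs                                   # per-mount options as each client sees them
      dscl . -read /Users/matt NFSHomeDirectory      # where the directory service says matt's home is
      # (accounts served by Open Directory may need the OD node instead of '.')
      sudo -u matt ls -la /mount/company_users/matt  # permissions problem, or mount problem?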

    Read the article

  • javaws not found

    - by Hunt
    I have a server which has CentOS installed on it. Recently I installed JDK 1.6 on it. When I run the java command from the shell it works perfectly fine. Java is stored in /usr/java/jdk1.6.0_25 and the path is set to /usr/bin/ when I type which java. When I tried running javaws (which comes with the JDK 1.6 only) it showed me the following error: Java Web Start splash screen process exiting ... Bad installation: JAVAWS_HOME not set: No such file or directory Executing the env command prints the following details: HOSTNAME=XX-XXX-XXX-XX TERM=xterm SHELL=/bin/bash HISTSIZE=1000 OLDPWD=/usr/java SSH_TTY=/dev/pts/1 USER=root LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35: JAVA_PATH=/usr/java/jre1.6.0_24/jre/bin MAIL=/var/spool/mail/root PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/java/jre1.6.0_24/jre/bin INPUTRC=/etc/inputrc PWD=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin JAVA_HOME=/usr/java/jre1.6.0_24/jre/bin LANG=en_US.UTF-8 SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass SHLVL=1 HOME=/root LOGNAME=root JAVAWS_HOME=/usr/java/jre1.6.0_24/bin SSH_CONNECTION=175.100.170.26 3387 64.150.190.94 22 LESSOPEN=|/usr/bin/lesspipe.sh %s G_BROKEN_FILENAMES=1 _=/bin/env
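
    Reading the env output above, the variables point at a jre1.6.0_24 tree while the installed JDK is jdk1.6.0_25, and both *_HOME values end in bin. A hedged cleanup for the shell profile (the assumption that JAVAWS_HOME should name the JRE root, not its bin directory, is worth verifying):

      export JAVA_HOME=/usr/java/jdk1.6.0_25   # the JDK actually present on disk
      export JAVAWS_HOME=$JAVA_HOME/jre        # assumed: the JRE root, not .../bin
      export PATH=$JAVA_HOME/bin:$PATH
      which javaws && javaws                   # sanity check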

    Read the article

  • mount.nfs: access denied by server while mounting (null), can't find any log information

    - by Mark0978
    Two ubuntu servers: 10.0.8.2 is the client, 192.168.20.58 is the server. Between the 2 machines, Ping works, ssh works (in both directions). From 10.0.8.2 showmount -e 192.168.20.58 Export list for 192.168.20.58: /imr/nfsshares/foobar 10.0.8.2 mount.nfs 192.168.20.58:/imr/nfsshares/foobar /var/data/foobar -v mount.nfs: access denied by server while mounting (null) Found several things online, tried them all and still can't find any log information anywhere. On the server: [email protected]:/var/log# cat /etc/hosts.allow sendmail: all ALL: 10.0.8.2 /etc/hosts.deny is all comments How can I get a trail of log statements to figure this out? What does it take to get some logging so I have some idea of WHY it won't mount? On the server: [email protected]# nmap -sR RPC 192.168.20.58 Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 21:16 CDT Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges Nmap scan report for 192.168.20.58 Host is up (0.0000060s latency). Not shown: 988 closed ports PORT STATE SERVICE VERSION 22/tcp open unknown 80/tcp open unknown 111/tcp open unknown 139/tcp open unknown 445/tcp open unknown 902/tcp open unknown 2049/tcp open unknown 3000/tcp open unknown 5666/tcp open unknown 8009/tcp open unknown 8222/tcp open unknown 8333/tcp open unknown Nmap done: 1 IP address (1 host up) scanned in 3.81 seconds From the client: [email protected]:~$ nmap -sR RPC 192.168.20.58 Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 22:14 EDT Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges Nmap scan report for 192.168.20.58 Host is up (0.73s latency). Not shown: 988 closed ports PORT STATE SERVICE VERSION 22/tcp open unknown 80/tcp open unknown 111/tcp open rpcbind (rpcbind V2) 2 (rpc #100000) 139/tcp open unknown 445/tcp open unknown 902/tcp open unknown 2049/tcp open nfs (nfs V2-4) 2-4 (rpc #100003) 3000/tcp open unknown 5666/tcp open unknown 8009/tcp open unknown 8222/tcp open unknown 8333/tcp open unknown Nmap done: 1 IP address (1 host up) scanned in 191.56 seconds
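
    The denials themselves are logged by rpc.mountd on the server; a sketch of where to look on 192.168.20.58:

      exportfs -v                            # what the kernel actually exports, with options
      tail -f /var/log/syslog | grep mountd  # watch a live mount attempt get refused
      # exporting to a bare IP avoids reverse-DNS surprises:
      #   /imr/nfsshares/foobar 10.0.8.2(rw,sync,no_subtree_check)
      sudo exportfs -ra                      # re-read /etc/exports after any change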

    Read the article

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lockout accounts after 3 failed logins per policy, however, the connecting user does not receive the error indicating pam_tally2's action. (Via SSH.) I expect to see on the 4th attempt: Account locked due to 3 failed logins No combination of required or requisite or the order in the file seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected but the user does not receive the error described above. This causes a lot of confusion and frustration as they have no way of knowing why authentication fails when they are sure they are using the correct password. Implementation follows NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5. (pg. 45) It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth. If locking out accounts after a number of incorrect login attempts is required by your security policy, implement use of pam_tally2.so. To enforce password lockout, add the following to /etc/pam.d/system-auth. First, add to the top of the auth lines: auth required pam_tally2.so deny=5 onerr=fail unlock_time=900 Second, add to the top of the account lines: account required pam_tally2.so EDIT: I get the error message by resetting pam_tally2 during one of the login attempts. user@localhost's password: (bad password) Permission denied, please try again. user@localhost's password: (bad password) Permission denied, please try again. (reset pam_tally2 from another shell) user@localhost's password: (good password) Account locked due to ... Account locked due to ... Last login: ... [user@localhost ~]$
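
    A likely explanation, though not a certain one: with plain password authentication, sshd does not relay PAM conversation text to the client, while keyboard-interactive authentication does. A sketch for /etc/ssh/sshd_config:

      # let sshd run the PAM conversation itself so module messages reach the client
      ChallengeResponseAuthentication yes
      PasswordAuthentication no
      # then: service sshd restart
      # and inspect the counter while testing: pam_tally2 --user someuser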

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/Firewalls, etc., using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)? I'm working my way through my first salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master. But I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings: /etc/salt/master: publish_port: 465 ret_port: 443 /etc/salt/minion: master_port: 465 That did not work. Background: I have a custom developed application presently running on about 40 Kubuntu laptops (& more planned). Every few months I have to update the application. (Often this just amounts to replacing a .jar file, which requires root permissions.) I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using Team Viewer to log into each client. I would like to dramatically improve this process. The two options I'm aware of are: (1) use reverse ssh tunnels and bash scripts (I tested this and it works, but I don't get any of the reporting, etc., that I would get with Salt Stack); or (2) use a Salt Stack (or similar) management tool. But I need a really simple tool. I can't invest any time in a big learning curve. I looked at Puppet and a bunch of related tools. The only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
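
    Worth verifying before any port changes: minions only make outbound connections to the master, so consumer NAT on the minion side needs no forwarding at all; only the master's two ports must be reachable. If the 465/443 scheme is kept, the minion's master_port must pair with the master's ret_port, not its publish_port (a hedged reading of the config defaults; the hostname below is hypothetical):

      # /etc/salt/master
      publish_port: 465
      ret_port: 443

      # /etc/salt/minion -- master_port pairs with ret_port; the publish port
      # is learned from the master during authentication
      master: salt.example.com
      master_port: 443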

    Read the article

  • Telnet does not give a response

    - by floorish
    Some wireless access points are acting a little weird, so I want to reboot them every couple of hours. Luckily there exists a security flaw which lets me log in as root through telnet when using port 1111 (without a username and password). Now I want to use that to let my QNAP NAS execute the reboot command through telnet every now and then. The problem, however, is that that telnet version doesn't give any response if I connect to the AP. The telnet I use on OSX works just fine but the one on the NAS does not. BusyBox v1.01 (2012.06.14-18:35+0000) multi-call binary Usage: telnet [-a] [-l USER] HOST [PORT] When I execute telnet <HOST> 1111 nothing happens. I can send the escape character ^] which gives me the following options: Console escape. Commands are: l go to line mode c go to character mode z suspend telnet e exit telnet The only way to get some commands executed is by suspending telnet with z, followed by some random command which isn't recognized. Then the prompt shows this: # telnet 192.168.1.5 1111 ^] Console escape. Commands are: l go to line mode c go to character mode z suspend telnet e exit telnet z continuing... asdf Illegal command. 00> After that I am able to communicate with the AP, but when I exit the telnet session and try the same again, the AP refuses to connect at all and must be manually rebooted (it looks like the telnet session isn't shut down properly on the AP). So the question is: what commands should I execute in order to communicate with the AP using the BusyBox telnet version on the QNAP? (No, I can't use ssh, unfortunately.)
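
    An untested sketch: BusyBox telnet reads from stdin, so the session can be fed from a pipe rather than interactively, with delays to give the AP time to respond:

      ( sleep 2; printf 'reboot\r\n'; sleep 2 ) | telnet 192.168.1.5 1111
      # if the QNAP's BusyBox includes nc, it behaves better for one-shot raw sessions:
      printf 'reboot\r\n' | nc 192.168.1.5 1111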

    Read the article

  • Error cloning gitosis-admin on new setup

    - by michaelmior
    I have the following in my gitosis.conf. (Created via gitosis-init < id_rsa.pub with the key from my laptop) [gitosis] loglevel = DEBUG [group gitosis-admin] writable = gitosis-admin members = michael@laptop When I try git clone git@SERVER:gitsos-admin.git, I get the following errors: Initialized empty Git repository in /home/michael/gitsos-admin/.git/ DEBUG:gitosis.serve.main:Got command "git-upload-pack 'gitsos-admin.git'" DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'writable' on 'gitsos-admin.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin' DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin' DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'writeable' on 'gitsos-admin.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin' DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin' DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'readonly' on 'gitsos-admin.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin' DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin' ERROR:gitosis.serve.main:Repository read access denied fatal: The remote end hung up unexpectedly I know my key is being accepted because I have tried logging in via SSH, and although a terminal won't be allocated, the authorization works.
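
    The DEBUG lines hold the answer: every access check runs against 'gitsos-admin' (note the missing 'i'), which no group in gitosis.conf grants. The clone URL is simply misspelled:

      git clone git@SERVER:gitosis-admin.git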

    Read the article

  • Accidentally deleted symlink libc.so.6 in CentOS 6.4. How to get sudo privilege to re-create it?

    - by Eric
    I accidentally deleted the symbolic link /lib64/libc.so.6 -> /lib64/libc-2.12.so with $ sudo rm libc.so.6 Then I cannot use anything, including the ls command. The error appears for any command I type ls: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory I've tried $ export LD_PRELOAD=/lib64/libc-2.12.so After this I can use ls and ln ..., but still cannot use sudo ln ..., sudo -E ln ..., sudo su or even su. I always get this error sudo: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory or su: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory It seems LD_PRELOAD works only for the current shell session of my account, but not for a new account like root or a new session. It's a remote server so I cannot use a live CD. I now have an ssh bash session alive but cannot establish new ones. I have sudo privilege, but don't have the root password. So currently my problem is I need to run sudo sln -s libc-2.12.so libc.so.6 to re-create the symlink libc.so.6, but I cannot run sudo without libc.so.6. How can I fix it? Thanks~
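
    Why the workaround stops at sudo: the dynamic loader ignores LD_PRELOAD for setuid binaries, and launching sudo through ld.so manually strips the setuid privilege as well. If any root context is still reachable (a root cron job, a console from the VPS provider's panel), the repair itself is one statically linked command:

      /sbin/sln /lib64/libc-2.12.so /lib64/libc.so.6   # sln is statically linked
      ldconfig                                         # rebuild the loader cache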

    Read the article

  • QoS for Cisco Router to Prioritize Voice and Interactive Traffic

    - by TJ Huffington
    I have a Cisco 891W NATing Voice and Data to the internet over a 10mbit/2mbit connection. Voice traffic gets degraded when I upload large files. Pings time out as well. I tried to configure a QoS policy but it's basically not doing anything. Voice traffic still degrades when upload bandwidth gets saturated. Here is my current configuration:

      class-map match-any QoS-Transactional
       match protocol ssh
       match protocol xwindows
      class-map match-any QoS-Voice
       match protocol rtp audio
      class-map match-any QoS-Bulk
       match protocol secure-nntp
       match protocol smtp
       match protocol tftp
       match protocol ftp
      class-map match-any QoS-Management
       match protocol snmp
       match protocol dns
       match protocol secure-imap
      class-map match-any QoS-Inter-Video
       match protocol rtp video
      class-map match-any QoS-Voice-Control
       match access-group name Voice-Control
      policy-map QoS-Priority-Output
       class QoS-Voice
        priority percent 25
        set dscp ef
       class QoS-Inter-Video
        bandwidth remaining percent 10
        set dscp af41
       class QoS-Transactional
        bandwidth remaining percent 25
        random-detect dscp-based
        set dscp af21
       class QoS-Bulk
        bandwidth remaining percent 5
        random-detect dscp-based
        set dscp af11
       class QoS-Management
        bandwidth remaining percent 1
        set dscp cs2
       class QoS-Voice-Control
        priority percent 5
        set dscp ef
       class class-default
        fair-queue
      interface FastEthernet8
       bandwidth 1024
       bandwidth receive 20480
       ip address dhcp
       ip nat outside
       ip virtual-reassembly
       duplex auto
       speed auto
       auto discovery qos
       crypto map mymap
       max-reserved-bandwidth 80
       service-policy output QoS-Priority-Output
      crypto map mymap 10 ipsec-isakmp
       set peer 1.2.3.4 default
       set transform-set ESP-3DES-SHA
       match address 110
       qos pre-classify
      !

    fa8 is my connection to the internet. Voice traffic goes over a VPN ("mymap") to the SIP server. That's why I specified "qos pre-classify", which I believe is the way to classify traffic over the VPN. However, even when I ping a public IP while saturating upload bandwidth, the latency is exceptionally high. Is this configuration correct? Are there any suggestions that might make this work for my setup? Thanks in advance.
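
    One gap worth checking, offered as a sketch rather than a verified fix: the priority queue only engages when the interface is congested, and a FastEthernet port never congests on a 2 Mb/s uplink, so the policy needs a parent shaper at the real upstream rate:

      ! shape to the true 2 Mb/s upstream, then nest the existing policy
      policy-map QoS-Shape-2M
       class class-default
        shape average 2000000
        service-policy QoS-Priority-Output
      !
      interface FastEthernet8
       no service-policy output QoS-Priority-Output
       service-policy output QoS-Shape-2M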

    Read the article
