Search Results


  • Games development with a game loop that's abstracted away

    - by Davy8
    Most game development happens around a main game loop. Are there any good articles/blog posts/discussions about games without a game loop? I imagine they'd mostly be web games, but I'd be interested in hearing otherwise. (As a side note, I think it's really interesting that the concept is almost exclusively used in gaming as far as I'm aware; perhaps that may be another question.)

    Edit: I realize there's probably a redraw loop somewhere. I guess what I really mean is a loop that is hidden from you. Frames are something you, as the developer, are not concerned with, because you're working at a higher level of abstraction. E.g. someLootItem.moveTo(inventory, someAnimationType) would move an item from the loot box to your inventory using the specified animation type, without the game developer having to worry about the implementation details of that animation. Maybe that's how "real" games end up working, but most tutorials seem to imply that a much more granular level of control is used; then again, that might just be an artifact of being a tutorial.

    Edit2: I think most people are misunderstanding what I'm trying to ask, likely because I'm having trouble describing exactly what I'm trying to ask. After some more thinking, perhaps what I'm referring to is more along the lines of what is called "scripting", where you're working at a very high level and some game engine takes care of the low-level details. For example, take custom maps in Starcraft II or Warcraft III. Many of the "maps" have gameplay that deviates enough from the primary game that they could be considered separate games written on the same engine. What I'm referring to is along those lines. I may be wrong because I only dabbled in the Warcraft III editor, but as far as I remember, nowhere in the map editor do you control the game loop, and yet you can create many different games with it. In my mind, these are games in their own right: if you're playing DotA you don't say you're playing Warcraft III, you say you're playing DotA, because that's the actual game you're playing. Such a system may impose limitations that don't exist when you're creating a game from scratch, but it greatly reduces development time because much of the "hard" work has already been done for you. Hopefully that clarifies what I'm asking.

    Another example of what I mean: when you write a web app, it of course communicates through sockets and TCP, but the average web developer doesn't explicitly write code for connecting sockets. They just need to know about receiving a request and sending a response. There are unique scenarios where you do occasionally need to use raw sockets, but it's generally rare in web development. In a similar fashion, it's very possible to write a game without directly using the game loop, even though one is used behind the scenes. Probably not a AAA title, but there must be hundreds of smaller-scale games that can be, and possibly are, written this way. Are there any good resources on writing these "simpler" games?
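    To make the idea concrete, here is a minimal sketch in Python of an engine that owns the loop while game code only issues high-level commands. Every name here (Engine, Item, move_to) is hypothetical, invented for illustration; it is not the API of any real engine.

        import time

        class Item:
            def __init__(self, pos):
                self.pos = pos

        class Tween:
            # Interpolates an item's position; the engine drives it each frame.
            def __init__(self, item, target, duration):
                self.item, self.target, self.duration = item, target, duration
                self.start, self.elapsed = item.pos, 0.0

            def update(self, dt):
                self.elapsed = min(self.elapsed + dt, self.duration)
                t = self.elapsed / self.duration
                self.item.pos = tuple(s + (e - s) * t
                                      for s, e in zip(self.start, self.target))
                return self.elapsed < self.duration  # False once the move finishes

        class Engine:
            def __init__(self):
                self.tweens = []

            def move_to(self, item, target, duration=0.5):
                # This is all the "scripting" layer ever calls.
                self.tweens.append(Tween(item, target, duration))

            def run(self):
                # The hidden game loop: game code never sees this.
                last = time.monotonic()
                while self.tweens:
                    now = time.monotonic()
                    dt, last = now - last, now
                    self.tweens = [tw for tw in self.tweens if tw.update(dt)]
                    time.sleep(1 / 60)  # roughly 60 updates per second

        engine = Engine()
        loot = Item(pos=(0.0, 0.0))
        engine.move_to(loot, (10.0, 5.0))  # "moveTo" without ever touching the loop
        engine.run()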

    Read the article

  • The Power to Control Power

    - by speakjava
    I'm currently working on a number of projects using embedded Java on the Raspberry Pi and Beagle Board. These are nice and small, so they don't take up much room on my desk, as you can see in this picture. As you can also see, I have power and network connections emerging from under my desk. One of the (admittedly very minor) drawbacks of these systems is that they have no on/off switch. Instead you insert or remove the power connector (USB for the RasPi, a barrel connector for the Beagle). For the Beagle Board this can potentially be an issue: with the micro-SD card located right next to the connector, it has been known for people to eject the card when trying to power off the board, which can be quite serious for the hardware. The alternative is obviously to leave the boards plugged in and then disconnect the power from the outlet. Simple enough, but a picture of underneath my desk shows that this is not the ideal situation either.

    This made me think that it would be great if I could have some way of controlling a mains voltage outlet using a remote switch or, even better, from software via a USB connector. A search revealed not much that fit my requirements, and anything that was close seemed very expensive. Obviously the only way to solve this was to build my own.

    Here's my solution. I decided my system would support both control mechanisms (remote physical switch and USB computer control) and be modular in its design for optimum flexibility. I did a bit of searching and found a company in Hong Kong that was offering solid state relays for 99p plus shipping (£2.99, but that still made the total price very reasonable). These would handle up to 380V AC on the output side, so they were more than capable of coping with the UK 240V supply. The other great thing was that, being solid state, the input would work with a range of 3-32V and required a very low current of 7.5mA at 12V. For the USB control an Arduino board seemed the obvious low-cost and simple choice. Given the current requirements of the relay, the Arduino would not require an additional power supply and could be powered just from the USB.

    Having secured the relays, I popped down to Homebase for a couple of 13A sockets, RS for a box and an Arduino, and Maplin for a toggle switch. The circuit is pretty straightforward, as shown in the diagram (only one output is shown to make it as simple as possible). Originally I used a 2-pole toggle switch to select the remote switch or USB control by switching the negative connections of the low voltage side. Unfortunately, the resistance between the digital pins of the Arduino board was not high enough, so when using one of the remote switches it would turn on both of the outlets. I changed to a 4-pole switch and isolated both positive and negative connections.

    IMPORTANT NOTE: If you want to follow my design, please be aware that it requires working with mains voltages. If you are at all concerned about your ability to do this, please consult a qualified electrician to help you.

    It was a tight fit, especially getting the Arduino in, but in the end it all worked. The completed box is shown in the photos. The remote switch was pretty simple, just requiring the squeezing of two rocker switches and a 9V battery into the small RS-supplied box. I repurposed a standard stereo cable with phono plugs to connect the switch box to the mains outlets: I chopped off one set of plugs and wired it to the rocker switches.

    The photo shows the RasPi and the Beagle Board now controllable from the switch box on the desk. I've tested the Arduino side of things and this works fine. Next I need to write some software to provide an interface for control of the outlets. I'm thinking a JavaFX GUI would be in keeping with the total overkill style of this project.
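    Since the eventual goal is driving the outlets from software over USB, here is a minimal sketch in Python, using pySerial, of what the computer side could look like. The one-character command protocol ('A'/'a' for outlet 1 on/off, 'B'/'b' for outlet 2) is purely an assumption for illustration; the real commands would be whatever the Arduino sketch defines.

        import serial  # pySerial

        COMMANDS = {
            (1, True): b'A', (1, False): b'a',  # outlet 1 on / off (assumed protocol)
            (2, True): b'B', (2, False): b'b',  # outlet 2 on / off (assumed protocol)
        }

        def set_outlet(port, outlet, on):
            # Send a single command byte to the Arduino driving the relays.
            with serial.Serial(port, 9600, timeout=1) as conn:
                conn.write(COMMANDS[(outlet, on)])

        set_outlet('/dev/ttyUSB0', 1, True)  # e.g. power up the Raspberry Pi

    One caveat worth noting: opening the serial port resets most Arduino boards, so a long-running process that keeps the port open (the planned JavaFX GUI, for instance) will behave more predictably than one-shot calls like the above.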

    Read the article

  • SharePoint: Numeric/Integer Site Column (Field) Types

    - by CharlesLee
    What field type should you use when creating number-based site columns as part of a SharePoint feature? Windows SharePoint Services 3.0 provides you with an extensible and flexible method of developing and deploying Site Columns and Content Types (both of which are required for most SharePoint projects requiring list- or library-based data storage) via the feature framework (more on this in my next full article). However, there is an interesting behaviour when working with a column or field which is required to hold a number, which I thought I would blog about today.

    When creating Site Columns in the browser you get a nice rich UI in order to choose the properties of this field. However, when you are recreating this as a feature defined in CAML (Collaborative Application Mark-up Language), which is a type of XML (more on this in my article), you do not get such a rich experience. You would need to add something like this to the element manifest defined in your feature:

        <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"
               ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}"
               Group="My Site Columns"
               Name="MyNumber"
               DisplayName="My Number"
               Type="Numeric"
               Commas="FALSE"
               Decimals="0"
               Required="FALSE"
               ReadOnly="FALSE"
               Sealed="FALSE"
               Hidden="FALSE" />

    OK, it's not as nice as the browser UI, but I can deal with this. Hang on. Commas="FALSE", and yet for my number 1234 I get 1,234. That is not what I wanted or expected. What gives? The answer lies in the difference between a type of "Numeric", which is an implementation of the SPFieldNumber class, and "Integer", which does not correspond to a given SPField class but rather represents a positive or negative integer. The Numeric type does not respect the settings of Commas or NegativeFormat (which defines how to display negative numbers). So we can set the Type to Integer and we are good to go. Yes?

    Sadly, no! You will notice at this point that if you deploy your site column into SharePoint, something has gone wrong: your site column is not listed in the Site Column Gallery. The deployment must have failed then? But no, a quick look at the site columns via the API reveals that the column is there. What new evil is this? Unfortunately, the base type for integer fields has this lovely attribute set on it: UserCreatable = FALSE. So WSS 3.0 accordingly hides your field in the gallery, as you cannot create fields of this type. However! You can use them in content types just like any other field (except not in the browser UI), and if you add them to the content type as part of your feature then they will show up in the UI as a field on that content type. Most of the time you are not going to be too concerned that your site columns are not listed in the gallery, as you will know that they are there and that they are still usable.

    So, not as bad as you thought after all. Just a little quirky. But that is SharePoint for you.

    Read the article

  • Talking JavaOne with Rock Star Kirk Pepperdine

    - by Janice J. Heiss
    Kirk Pepperdine is not only a JavaOne Rock Star but a Java Champion and a highly regarded expert in Java performance tuning who works as a consultant, educator, and author. He is the principal consultant at Kodewerk Ltd. He speaks frequently at conferences and co-authored the Ant Developer's Handbook. In the rapidly shifting world of information technology, Pepperdine, as much as anyone, keeps up with what's happening in Java performance tuning.

    Pepperdine will participate in the following sessions:

    - CON5405 - Are Your Garbage Collection Logs Speaking to You?
    - BOF6540 - Java Champions and JUG Leaders Meet Oracle Executives (with Jeff Genender, Mattias Karlsson, Henrik Stahl, and Georges Saab)
    - HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Ellen Kraffmiller, Martijn Verburg, Jeff Genender, and Henri Tremblay)

    I asked him what technological changes need to be taken into account in performance tuning. "The volume of data we're dealing with just seems to be getting bigger and bigger all the time," observed Pepperdine. "A couple of years ago you'd never think of needing a heap that was 64g, but today there are deployments where the heap has grown to 256g, and tomorrow there are plans for heaps that are even larger. Dealing with all that data simply requires more horsepower and some very specialized techniques. In some cases, teams are trying to push hardware to the breaking point. Under those conditions, you need to be very clever just to get things to work -- let alone to get them to be fast. We are very quickly moving from a world where everything happens in a transaction to one where if you were to even consider using a transaction, you've lost."

    When asked about the greatest misconceptions about performance tuning that he currently encounters, he said, "If you have a performance problem, you should start looking at code at the very least and, for that extra step, whip out an execution profiler. I'm not going to say that I never use execution profilers or look at code. What I will say is that execution profilers are effective for a small subset of performance problems, and code is literally the last thing you should look at."

    And what is the most exciting thing happening in the world of Java today? "Interesting question, because so many people would say that nothing exciting is happening in Java. Some might be disappointed that a few features have slipped in terms of scheduling. But I'd disagree with the first group, and I'm not so concerned about the slippage, because I still see a lot of exciting things happening. First, lambda will finally be with us, and with lambda will come better ways."

    For JavaOne, he is proctoring for Heinz Kabutz's lab. "I'm actually looking forward to that more than I am to my own talk," he remarked. "Heinz will be the third non-Sun/Oracle employee to present a lab and the first since Oracle began hosting JavaOne. He's got a great message. He's spent a ton of time making sure things are going to work, and we've got a great team of proctors to help out. After that, getting my talk done, the Java Champions panel session, and then kicking back and just meeting up and talking to some Java heads."

    Finally, what should Java developers know that they currently do not know? "'Write Once, Run Everywhere' is a great slogan, and Java has come closer to that dream than any other technology stack that I've used. That said, different hardware bits work differently, and as hard as we try, the JVM can't hide all the differences. Plus, if we are to get good performance we need to work with our hardware and not against it. All this implies that Java developers need to know more about the hardware they are deploying to."

    Read the article

  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to travel through Eurasia and Asia (Russia, China, Korea, Japan, South East Asia...) for a while and, although there are plenty of marvelous things to see and to do, I must keep on working :(. I am a python developer, dedicated mainly to web projects. I use django, sqlite3, browsers, and occasionally (only if I have no choice) I install postgres, mysql, apache or any other servers commonly used in the internets. I do my coding in vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... So I spend most of my time in terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (that after all is not mandatory for my day-to-day work).

    I currently work with a MacBook Pro (3 years old now) and so far I am very happy with the performance. But I don't want to carry it if I am going to be "in transit" for a long time; it is simply huge and heavy for what I am planning to load into my rather small backpack (while traveling, less is more, you know). So here I am, reading all kinds of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose. I am going to use Linux on it; Microsoft is not my cup of tea, and Mac is not available for them, unless I were to buy a MacBook Air, something that I won't do because if I am robbed, or rain/dust/truck loaders break it, I would burst into tears. I am concerned about wifi performance and connectivity; I am going to use one of those linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not in a place with real free wifi access and I find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or expensive IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider.

    With that said (sorry about the length), here comes my question, raised from a deep ignorance regarding the wars between netbooks and notebooks (I assume tablet PCs are not for programming yet): If I buy a netbook, will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK? Thanks! Hector

    Update: I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (Trans-Siberian) and will cross Mongolia as well. I will stay in friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey, this last one doesn't sound too bad, hehe!). I think some of your answers seem to focus on the geek part but lose the point of this "on the road" fact. I am very aware, and agree, that netbooks suck compared to notebooks, but what I am trying to do here is to find a balance and discover your experiences with netbooks, to see first hand if a netbook will be a fail in the mid-to-long term of the trip for my purposes.

    So I have summarized the main concepts expressed here in this small list, in no particular order:

    - keyboard/touchpad feel: I use vim, so no need to move mouse pointers that much, unless I am browsing the web, but intensive use of the keyboard
    - screen real estate: again, terminal work for most of the time
    - battery life: I think something very important
    - weight/size: also very important
    - looks not worth stealing it, don't give a shit if it is lost/stolen/broken: this may depend on the kind of person, your economy, etc. Also, to prevent losing work, I will upload EVERYTHING to the cloud whenever I have a chance.
    - wifi: don't want to discover my wifi is one of those that cannot deal with half the routers on this planet or has poor connectivity.

    Thanks again for your answers and comments!

    Read the article

  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I’ve yet to have a totally smooth experience with PowerShell, and it was late on Friday when I crashed into this problem. I haven’t investigated whether this is a generally well-understood circumstance, and if it is then I apologise for repeating everything.

    Scenario: I wanted to scan a number of servers for many properties, including existing logins, and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up PowerShellISE and started typing. The script doesn’t come easily to me, but I follow the logic of SMO and the properties and methods available within the language, so it seemed something I should be able to master.

    Version #1 of my script. And the results it returns when executed against my home laptop server. These results looked good, and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server. Let’s just review my logic for each step of the code at the top:

    - Lines 1 to 7 just set up our variables and write out the header message
    - Line 8: our first loop, to go through each login on the server
    - Line 10: an inner loop that will assess each role name that each login has been assigned
    - Line 11: a test to see if each role has the name ‘sysadmin’
    - Line 13: write out the login name with a bright format, as it is a sysadmin login
    - Line 17: write out the login name with no formatting

    It is quite possible that here someone with more PowerShell experience than me will be shouting at their screen, pointing at the error I made, but to me this made total sense. Until I altered the code. I altered lines 6 and 7 of the code above to be:

        $c = $Svr.Logins.Count
        write-host "There are $c Logins on the server"

    This changed my output to look like this. This started alarm bells ringing -- there are clearly not 13 logins listed. So, let’s see where things are going wrong; edit the script so it looks like this (I’ve highlighted the changes to make). Running this code shows me these results: our $n variable should count up by one for each login returned, and we are clearly missing some logins. I referenced this list back to Management Studio for my server and see the logins as below, where there are clearly 13 logins. We see a login called Annette in SSMS but not in the script results, so I opened that up and looked at its properties, and its server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses out shows they too are only members of the public role. Right now I can’t work out whether there is a good reason for this and if it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory for this.

    How to get the full list of logins. Clearly I needed to get a full list of the logins, so I set about reviewing my code to see if there was a better way to iterate through the roles for each login. This is the code that I came up with, and I think it is doing everything that I need it to. It gives me the expected results, like this. So it seems that the ListMembers() method is the troublemaker in my first versions of the code. I would have expected that ListMembers should return logins that are only members of the public role; certainly Technet makes no reference to them being left out in its Login.ListMembers details. Suffice to say, it’s a lesson learned and I will approach using it with caution in future circumstances.

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, the categories for those products (a product can be in one or more categories, referred to as "Product Categories"), and which stores these products are available at. These tables are updated once a week from a feed from the customer. Since for our purposes some of the product categories are the same, or closely related, there is another level of categories called "General Categories"; a general category can have one or more product categories.

    For the scope of these tables, here are some rough numbers:

    Data tables:
    - Products: 475,000
    - Manufacturers: 1,300
    - Stores: 150
    - General Categories: 245
    - Product Categories: 500

    Mapping tables:
    - Product Category -> Product: 655,000
    - Stores -> Products: 50,000,000

    Now, for the actual problem: as part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them on manufacturer and including their row number from within their partition, then selecting from that where the row number for that manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. This query looks something like this:

        SET ROWCOUNT 6
        select p.Id, GeneralCategory_Id, Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               MSRP, MemberPrice, FamilyImageName
        from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
                     ctp.Store_id, Manufacturer_id,
                     ROW_NUMBER() OVER (PARTITION BY Manufacturer_id
                                        ORDER BY NEWID()) AS 'VendorOrder',
                     MSRP, MemberPrice, FamilyImageName
              from GeneralCategory gc
              inner join GeneralCategoriesToProductCategories gctpc
                      ON gc.Id = gctpc.GeneralCategory_Id
              inner join ProductCategoryToProduct pctp
                      on gctpc.ProductCategory_Id = pctp.ProductCategory_Id
              inner join Product p on p.Id = pctp.Product_Id
              inner join StoreToProduct ctp on p.Id = ctp.Product_id
              where gc.Id = @GeneralCategory
                and ctp.Store_id = @StoreId
                and p.Active = 1
                and p.MemberPrice > 0) p
        inner join Manufacturer m on m.Id = p.Manufacturer_id
        where VendorOrder <= 6
        order by NEWID()
        SET ROWCOUNT 0

    Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time:

    - Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when creating this table, but I'm not concerned about this at this point, compared to the other seek...
    - Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.

    On categories without a lot of products, performance is acceptable (<50ms); however, larger categories can take a few hundred ms, with the largest category taking 3s (it has about 170k products).

    It seems I have two ways to go from this point:

    - Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and maintain the requirements (random products, with a good mix of manufacturers).
    - Denormalize this data for the purpose of this query when doing the once-a-week import. However, I am unsure how to do this and maintain the requirements.

    Does anyone have any input on either of these items?
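    For what it's worth, the selection logic itself can be stated independently of the SQL. Here is a small sketch in Python of the "fair mix" algorithm the query above implements: cap each manufacturer at n randomly-chosen candidates, then randomly pick n overall. This is only an illustration of the algorithm, not a suggestion to move the work out of the database.

        import random
        from collections import defaultdict

        def fair_random_mix(products, n):
            # products: iterable of (product_id, manufacturer_id) pairs
            by_mfr = defaultdict(list)
            for pid, mfr in products:
                by_mfr[mfr].append(pid)
            pool = []
            for pids in by_mfr.values():
                random.shuffle(pids)      # ORDER BY NEWID() within the partition
                pool.extend(pids[:n])     # WHERE VendorOrder <= n
            random.shuffle(pool)          # final ORDER BY NEWID()
            return pool[:n]               # SET ROWCOUNT n

        picks = fair_random_mix([(1, 'A'), (2, 'A'), (3, 'A'), (4, 'B'), (5, 'C')], n=3)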

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with the return on the money they invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with the base definitions I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on the equity capital respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus.

    Now that the challenge has been made and the terms have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.

    Most UPK ROIs I have seen only include the cost savings in developing the training material. Some will also include savings based on reduced Help Desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing/upgrading an enterprise application. Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good base value to measure the current efficiency. Using usage tracking you can look for various patterns. For example, you may find that users who are accessing UPK assistance are processing a procedure, such as entering an order, 5 minutes faster than those who don't. You do some research and discover each minute saved in processing a transaction saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt, or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find ways to reduce cost. These are the types of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.
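    For reference, the standard textbook definitions behind the two measures discussed above (general formulas, not specific to this article's method):

        ROI = (Gain from Investment - Cost of Investment) / Cost of Investment
        ROE = Net Income / Shareholders' Equity

    Plugging in the example figures: 5 minutes saved per transaction x $1 per minute x 100,000 transactions = $500,000 per year, savings that flow into net income, the numerator of ROE.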

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices. Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions.

    - Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you're an EBS customer. Any information you can supply that helps us understand the situation better helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR!

    - Be right. Make sure you have the correct severity level. Since you select the initial severity level, it's easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate.

    - Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner.

    - Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner.

    - Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going in the right direction, at the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes.

    - Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience.

    Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we'll be able to do this with your Service Request too.

    Read the article

  • Is Your Company Social on the Inside?

    - by Mike Stiles
    As we talk about the extension of social from an outbound-facing marketing tool to a platform that will reach across the entire enterprise, servicing multiple functions of that enterprise, it might be time to take a look at how social can be effectively employed for internal communications.

    Remember the printed company newsletter? Yeah, nobody reads it. Remember the emailed company newsletter? Yeah, nobody reads it. Why not? Shouldn't your employees care about the company more than anything else in life and be voraciously hungry for any information related to it? The more realistic prospect is that a company's employees don't behave much differently at work, where information is concerned, than they do in their personal lives. They "tune in" to information that's immediately relevant to them, that piques their interest, and/or that's presented in a visually engaging way. That currently makes an internal social platform the most ideal way to communicate within the organization. It not only facilitates more immediate, more targeted (and thus more relevant) messaging from the company out to employees, it sets a stage for employees to communicate with each other and efficiently get answers to questions from peers. It's a collaboration tool on steroids.

    If you build such an internal social portal and you do it right, will employees use it? Considering social media has officially been declared more addictive than cigarettes, booze and sex... probably. But what does it mean to do an internal social platform "right"? The bar has been set pretty high. Your employees are used to Twitter and Facebook, and would roll their eyes at anything less simple or harder to navigate than those. All the Facebook best practices would apply to your internal social as well, including the importance of managing posting frequency, using photos and video, moderation & response, etc.

    And don't worry, you won't be the first to jump in. WPP's global digital agency Possible has its own social network called Colab. Nestle has "The Nest." Red Robin's got one. I myself got an in-depth look at McGraw-Hill's internal social platform at Blogwell NYC. Some of these companies are building their own platforms; others are buying them off the shelf or customizing readymade solutions. But you won't be the last either. Prescient Digital Media and the IABC learned that 39% of companies don't offer employees any social tools. Not a social network, not discussion forums, not even IM. And a great many continue to ban the use of Facebook and Twitter on the premises. That's pretty astonishing, since social has become as essential a modern-day communications tool as the telephone. But such holdouts will pay a big price for being mired in fear while competitors exploit social connections unchallenged.

    Fish where the fish are. If social has become the way people communicate and take in information, let that be the way communication is trafficked in the organization.

    Read the article

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications, all based on the Zend framework and centered on a complex business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together with a common center). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months' time when my time is up, anyone else coming in can have no problem extending on what I have produced.

    Example structure:

        applicationStorage\                                (all applications and associated data)
        applicationStorage\Applications\                   (the applications themselves)
        applicationStorage\Applications\external\          (grouping folder: all external customer access applications)
        applicationStorage\Applications\external\site\     (main external customer access application)
        applicationStorage\Applications\external\site\Modules\
        applicationStorage\Applications\external\site\Config\
        applicationStorage\Applications\external\site\Layouts\
        applicationStorage\Applications\external\site\ZendExtended\   (extended Zend classes specific to this application, e.g. ZendExtended_Controller_Action extends Zend_Controller_Action)
        applicationStorage\Applications\external\mobile\   (mobile external customer access application; different workflow, limited capabilities compared to the full site version)
        applicationStorage\Applications\internal\          (grouping folder: all internal company applications)
        applicationStorage\Applications\internal\site\     (main internal application)
        applicationStorage\Applications\internal\mobile\   (mobile access; different flow and limited abilities compared to the main site version)
        applicationStorage\Tests\                          (PHP unit tests)
        applicationStorage\Library\
        applicationStorage\Library\Service\                (all business logic, services and service locator; completely decoupled from the Zend framework, relying on the models' interfaces)
        applicationStorage\Library\Zend\                   (Zend framework)
        applicationStorage\Library\Models\                 (doesn't know services but is linked to the Zend framework for DB operations; contains model interfaces and model datamappers for all business objects, e.g. Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)

    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.)

    The library contains all "universal" code. The Zend framework is at the heart of all applications, but I wanted my business logic to be Zend-framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it in private. So my hope is that in the future I will be able to rewrite the mappers and dbtables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend framework, if I want to move my business logic (services) to a non-Zend framework environment (maybe some other PHP framework).

    Using a service locator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. For example, all external applications will have a Service_Auth_External implementing Service_Auth_Interface registered; the same goes for internal applications, with Service_Auth_Internal implementing Service_Auth_Interface, looked up via Service_Locator::getService('Auth'). A sketch of this pattern follows below. I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would be greatly appreciative.

    I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and different available services:

        \applicationstorage\Applications\internal\webservice
        \applicationstorage\Applications\external\webservice
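    Since the service-locator arrangement described above is language-independent, here is a compact sketch of the idea in Python. The names stand in for the PHP classes mentioned (Service_Auth_Interface, Service_Auth_External/Internal, Service_Locator), and the credential checks are placeholders, not a real implementation.

        class AuthService:
            # the shared interface every auth implementation must honor
            def authenticate(self, user, password):
                raise NotImplementedError

        class ExternalAuth(AuthService):
            def authenticate(self, user, password):
                # a real implementation would check the customer database
                return (user, password) == ("customer", "secret")

        class InternalAuth(AuthService):
            def authenticate(self, user, password):
                # a real implementation would check the staff directory
                return (user, password) == ("staff", "secret")

        class ServiceLocator:
            _services = {}

            @classmethod
            def register(cls, name, service):
                cls._services[name] = service

            @classmethod
            def get(cls, name):
                return cls._services[name]

        # each application's bootstrap registers its own implementation...
        ServiceLocator.register("Auth", ExternalAuth())
        # ...while the shared business logic stays implementation-agnostic:
        ok = ServiceLocator.get("Auth").authenticate("customer", "secret")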

    Read the article

  • Postfix - am I sending spam?

    - by olrehm
    Today I received something like 30 messages within 5 minutes telling me that some mail I sent could not be delivered, mostly to *.ru email addresses which I did not send any mail to. I have my own webserver (postfix/dovecot) set up using this guide (http://workaround.org/ispmail/lenny), but adjusted a little bit for Ubuntu. I tested whether I am an open relay, which I am apparently not. Now there are two possible reasons for the above-mentioned emails: either I am sending out spam, or somebody wants me to think that, correct? How can I check this?

    I selected one particular address that I supposedly sent spam to. Then I searched my mail.log for this entry. I found two blocks that record that somebody from the server connected to my server and delivered some message to two different users. I cannot find an entry reporting that anyone from my server sent an email to that server. Does this mean it's just some mail to scare me, or could it still have been sent by me in the first place? Here is one such block from the log (I replaced some confidential stuff):

        Jun 26 23:23:28 mycustomernumber postfix/smtpd[29970]: connect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: 044991528995: client=mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/cleanup[29974]: 044991528995: message-id=<[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/qmgr[3369]: 044991528995: from=<>, size=2198, nrcpt=1 (queue active)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) ESMTP::10024 /var/lib/amavis/tmp/amavis-20110626T223137-28598: <> -> <[email protected]> SIZE=2198 Received: from mycustomernumber.stratoserver.net ([127.0.0.1]) by localhost (rehmsen.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP for <[email protected]>; Sun, 26 Jun 2011 23:23:29 +0200 (CEST)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) Checking: YakjkrdFq6A8 [195.144.251.97] <> -> <[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: disconnect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) lookup_sql_field(id) (WARN: no such field in the SQL table), "[email protected]" result=undef
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: connect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: 0A1FA1528A21: client=localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/cleanup[29974]: 0A1FA1528A21: message-id=<[email protected]>
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: from=<>, size=2841, nrcpt=1 (queue active)
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: disconnect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) FWD via SMTP: <> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) Passed CLEAN, [195.144.251.97] [195.144.251.97] <> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: YakjkrdFq6A8, Hits: 2.249, size: 2197, queued_as: 0A1FA1528A21, 2882 ms
        Jun 26 23:23:32 mycustomernumber postfix/smtp[29975]: 044991528995: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.3, delays=0.39/0.01/0.01/2.9, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21)
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 044991528995: removed
        Jun 26 23:23:33 mycustomernumber postfix/smtp[29980]: 0A1FA1528A21: to=<[email protected]>, orig_to=<[email protected]>, relay=mx3.hotmail.com[65.54.188.110]:25, delay=1.2, delays=0.15/0.02/0.51/0.55, dsn=2.0.0, status=sent (250 <[email protected]> Queued mail for delivery)
        Jun 26 23:23:33 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: removed
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection rate 1/60s for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection count 1 for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max cache size 1 at Jun 26 23:23:28

    I can provide more info if you tell me what you need to know. Thank you for your help!
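    As a quick first check of whether mail really originates locally, one could tally the envelope senders that Postfix records in the log. A minimal sketch in Python (the /var/log/mail.log path and the from=<...> pattern match the excerpt above; adjust for your setup):

        import re
        from collections import Counter

        senders = Counter()
        with open('/var/log/mail.log') as log:
            for line in log:
                match = re.search(r'from=<([^>]*)>', line)
                if match:
                    # an empty capture is the null sender used for bounces
                    senders[match.group(1) or '<> (bounce)'] += 1

        for sender, count in senders.most_common(20):
            print('%6d  %s' % (count, sender))

    Note that in the excerpt above the envelope sender is the null address from=<>, which mail systems use for bounce notifications rather than for ordinary outgoing mail.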

    Read the article

  • Teamviewer: cannot control monitor 1, but can control monitor 2

    - by DaveT
    I'm using the web client of TeamViewer from my work computer, trying to control my home computer. I have 2 monitors on the remote desktop, but for some reason only have control on the second monitor. When I switch to the main monitor (monitor 1), I cannot do anything and cannot even move the cursor. But I have no issues when I switch over to the second monitor (monitor 2). I used to have no issues with either, but in the past couple of months this has been causing me issues. Anyone have a suggestion? Thanks!!

    Also, here is the log from the TeamViewer session, showing me switching back and forth between the monitors (just in case this will help). I had to remove the links in order to post the log since I don't have enough reputation points, but they were just teamviewer login weblinks.

        ===============================================================================
        21.08 16:00:41,176: Version: 9.0.15099
        21.08 16:00:41,177: Sandbox: remote
        21.08 16:00:41,177: SysLanguage: en
        21.08 16:00:41,177: VarLanguage: en
        21.08 16:00:41,177: Flash Player: PlugIn (WIN 14,0,0,179)
        21.08 16:00:41,178: UseLanguage: en
        21.08 16:00:41,178: UseLanguage: en
        21.08 16:00:41,182: TeamViewer hasPassword: true
        21.08 16:00:41,418: ExternalConnect id=910035824
        21.08 16:00:41,419: CT connect 910035824 masterURL: , sandbox = remote
        21.08 16:00:41,425: MC.requestRoute(910035824)
        21.08 16:00:41,426: MC.sendMasterCommand text=F=RequestRoute2&ID1=777&Client=TV&ID2=910035824&SA_AccountID=26641022&SA_PasswordMD5HashBase64Encoded=&SA_SessionSecret=f7H6Z7SYfX5ahQ7SJq/r/K20PBYg9fOZhp+DKLhf5ts=&SA_SessionID=1558929948&V=9.0.15099&OS=Flash
        21.08 16:00:41,426: MC wait for ping completion
        21.08 16:00:42,064: PS.socket event: [Event type="connect" bubbles=false cancelable=false eventPhase=2]
        21.08 16:00:42,182: PingThread: TCP-Ping ok
        21.08 16:00:42,183: MC.socket mode = TCP, MasterURL:
        21.08 16:00:42,183: MC.connect:
        21.08 16:00:43,058: PS.socket event: [Event type="connect" bubbles=false cancelable=false eventPhase=2]
        21.08 16:00:43,058: MC.connectHandler: [Event type="connect" bubbles=false cancelable=false eventPhase=2]
        21.08 16:00:43,236: MC.requestRouteResponse: [email protected]_10800_128000_762319420_910035824_10000__1_0_16778176_128000_16778176: 128000;2147483647:1280000;4:640000_786297_786297
        21.08 16:00:43,239: CT init socket: TCP
        21.08 16:00:43,513: PS.socket event: [Event type="connect" bubbles=false cancelable=false eventPhase=2]
        21.08 16:00:43,514: CT.connectHandler: [Event type="connect" bubbles=false cancelable=false eventPhase=2]
        21.08 16:00:43,519: Browser name: Netscape
        21.08 16:00:43,936: CMD_IDENTIFY id=910035824 ver=2.41
        21.08 16:00:44,666: CMD_CONFIRMENCRYPTION: encryption confirmed
        21.08 16:00:44,667: Started resendrequest timer
        21.08 16:00:45,063: Remote Version: TV 009.000
        21.08 16:00:45,501: start classic authentication
        21.08 16:00:45,502: Login::SendRequestToConsole(): url=
        21.08 16:00:45,828: start srp authentication
        21.08 16:00:46,983: checkFirstPacket ok, m_LastReceivedPacketID =4
        21.08 16:00:47,148: Login::SendRequestToConsole(): url=
        21.08 16:00:47,478: start srp authentication
        21.08 16:00:48,210: Login::SendRequestToConsole(): url=
        21.08 16:00:48,485: checkFirstPacket ok, m_LastReceivedPacketID =7
        21.08 16:00:48,780: TVCmdAuthenticate_Authenticated: 1
        21.08 16:00:49,321: Connected to 910035824, name=NEWMAN, os=14, version=9.0.31064
        21.08 16:00:49,329: ConnectionAccessSettings: RemoteControl: Allowed FileTransfer: Allowed ControlRemoteTV: Allowed SwitchSides: Denied AllowDisableRemoteInput: Allowed AllowVPN: Allowed AllowPartnerViewDesktop: Allowed
        21.08 16:00:52,195: unexpected TVCommand.CommandType == 56
        21.08 16:00:52,231: CW received display params: 1680x1050x8 monitors: 2 (active:0)
        21.08 16:00:52,301: Caching active, version=2
        21.08 16:03:47,158: CW received display params: 1680x1050x8 monitors: 2 (active:1)
        21.08 16:04:24,447: CW received display params: 1680x1050x8 monitors: 2 (active:0)
        21.08 16:04:40,609: CW received display params: 3360x1050x8 monitors: 2 (active:-1)
        21.08 16:04:59,802: CW received display params: 1680x1050x8 monitors: 2 (active:1)
        21.08 16:04:59,933: CW received display params: 1680x1050x8 monitors: 2 (active:1)
        21.08 16:05:58,419: CW received display params: 1680x1050x8 monitors: 2 (active:0)
        21.08 16:06:36,824: CW received display params: 1680x1050x8 monitors: 2 (active:1)
        21.08 16:07:07,232: CW received display params: 1680x1050x8 monitors: 2 (active:0)

    Read the article

  • OpenVPN: Connection established but can’t connect to server

    - by Maik
    I am trying to set up OpenVPN to allow me to connect a number of laptops to my network in a way that allows the laptops to connect to specific computers via HTTP (to e.g. a server management page) and Windows shares (to access files).

    In the test environment my laptops live in a network with a 192.168.1.X address range. The host network has a 10.66.77.X address range. The server hosting the OpenVPN server has address 10.77.10.20; I need to access some application server web pages on this machine, accessible on various ports. The server with the Windows shares, as well as some other web-based pages I need to access, is on address 10.66.77.20.

    The config files for server and laptop are attached below. The laptop establishes the VPN connection without problems, but I cannot access any of the machines; even a simple ping fails. Maybe a routing problem? The routing table for the laptop is shown below as well. Every idea is appreciated! Thanks! Maik

    Server config file:

        port 1194
        dev tun
        tls-server
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/projects.crt
        key /etc/openvpn/keys/projects.key
        dh /etc/openvpn/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "route 10.66.77.0 255.255.255.0"
        keepalive 10 60
        inactive 600
        route 10.8.0.1 255.255.255.0
        user openvpn
        group openvpn
        persist-tun
        persist-key
        verb 4

    Client config file:

        dev tun
        proto udp
        remote SERVERADDR 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert accountingLaptop.crt
        key accountingLaptop.key
        ns-cert-type server
        comp-lzo
        verb 3

    Resulting routing table on client laptop:

        C:\Documents and Settings\User>route print
        ===========================================================================
        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x2 ...00 23 5a 9b 64 9b ...... Atheros AR8132 PCI-E Fast Ethernet Controller - Packet Scheduler Miniport
        0x3 ...00 24 2c 35 c9 6b ...... Dell Wireless 1395 WLAN Mini-Card - Packet Scheduler Miniport
        0x4 ...00 ff 5e 03 43 9b ...... TAP-Win32 Adapter V9 - Packet Scheduler Miniport
        ===========================================================================
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0      192.168.1.1   192.168.1.129      25
                 10.8.0.1  255.255.255.255         10.8.0.5        10.8.0.6       1
                 10.8.0.4  255.255.255.252         10.8.0.6        10.8.0.6      30
                 10.8.0.6  255.255.255.255        127.0.0.1       127.0.0.1      30
               10.66.77.0    255.255.255.0         10.8.0.5        10.8.0.6       1
           10.255.255.255  255.255.255.255         10.8.0.6        10.8.0.6      30
                127.0.0.0        255.0.0.0        127.0.0.1       127.0.0.1       1
              192.168.1.0    255.255.255.0    192.168.1.129   192.168.1.129      25
            192.168.1.129  255.255.255.255        127.0.0.1       127.0.0.1      25
            192.168.1.255  255.255.255.255    192.168.1.129   192.168.1.129      25
                224.0.0.0        240.0.0.0         10.8.0.6        10.8.0.6      30
                224.0.0.0        240.0.0.0    192.168.1.129   192.168.1.129      25
          255.255.255.255  255.255.255.255         10.8.0.6               2       1
          255.255.255.255  255.255.255.255         10.8.0.6        10.8.0.6       1
          255.255.255.255  255.255.255.255    192.168.1.129   192.168.1.129       1
        Default Gateway:       192.168.1.1
        ===========================================================================
        Persistent Routes:
        None

    Read the article

  • Hundreds of unknown entries in Linux logwatch

    - by Saif Bechan
    I have a dedicated server which runs CentOS. Today I got an email from logwatch on my server with hundreds of lines of 'errors'. I don't really know what they are, because I am fairly new at this. The lines are in a few sections; I will display the first ten or so of each of them, and I hope someone can help me fix these problems.

        --------------------- Named Begin ------------------------

        **Unmatched Entries**
        client 216.146.46.136 notify question section contains no SOA: 8 Time(s)
        client 92.114.98.10 query (cache) 'adobe.com/A/IN' denied: 4 Time(s)
        network unreachable resolving '11.254.75.75.in-addr.arpa/PTR/IN': 2001:7fd::1#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:13c7:7002:3000::11#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:500:13::c7d4:35#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:500:2e::1#53: 2 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:610:240:0:53::193#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:610:240:0:53::3#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:660:3006:1::1:1#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:6b0:7::2#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:dc0:1:0:4777::140#53: 1 Time(s)
        network unreachable resolving '136.176.97.93.in-addr.arpa/PTR/IN': 2001:dc0:2001:a:4608::59#53: 1 Time(s)
        network unreachable resolving '146.250.19.67.in-addr.arpa/PTR/IN': 2001:5a0:10::2#53: 1 Time(s)
        network unreachable resolving '149.207.106.87.in-addr.arpa/PTR/IN': 2001:7fd::1#53: 1 Time(s)
        network unreachable resolving '178.62.24.195.in-addr.arpa/PTR/IN': 2001:7fd::1#53: 1 Time(s)

    This goes on for hundreds of lines with all different domain names.

        --------------------- pam_unix Begin ------------------------

        Failed logins from:
           78.86.126.211 (78-86-126-211.zone2.bethere.co.uk): 111 times
           93.97.176.136 (93-97-176-136.dsl.cnl.uk.net): 113 times
           121.14.145.32: 136 times
           190.152.69.5: 248 times
           209.160.72.15: 572 times
           210.26.48.35: 2 times
           212.235.111.224 (DSL212-235-111-224.bb.netvision.net.il): 140 times
           218.206.25.29: 140 times

        Illegal users from:
           78.86.126.211 (78-86-126-211.zone2.bethere.co.uk): 2665 times
           93.97.176.136 (93-97-176-136.dsl.cnl.uk.net): 2539 times
           121.14.145.32: 116 times
           190.152.69.5: 34 times
           209.160.72.15: 324 times
           218.206.25.29: 8051 times

        proftpd:
        Unknown Entries:
           session opened for user cent_ftp by (uid=0): 15 Time(s)
           session closed for user cent_ftp: 14 Time(s)

        sshd:
        Authentication Failures:
           unknown (218.206.25.29): 8051 Time(s)
           unknown (78-86-126-211.zone2.bethere.co.uk): 2665 Time(s)
           unknown (93.97.176.136): 2539 Time(s)
           root (209.160.72.15): 558 Time(s)
           unknown (209.160.72.15): 324 Time(s)
           root (190.152.69.5): 246 Time(s)
           unknown (121.14.145.32): 116 Time(s)
           root (121.14.145.32): 106 Time(s)
           root (dsl212-235-111-224.bb.netvision.net.il): 70 Time(s)
           root (93.97.176.136): 44 Time(s)
           root (78-86-126-211.zone2.bethere.co.uk): 37 Time(s)
           unknown (190.152.69.5): 34 Time(s)
           mysql (121.14.145.32): 30 Time(s)
           nobody (218.206.25.29): 26 Time(s)
           mail (218.206.25.29): 24 Time(s)
           news (218.206.25.29): 24 Time(s)
           root (218.206.25.29): 24 Time(s)

        --------------------- SSHD Begin ------------------------

        **Unmatched Entries**
        pam_succeed_if(sshd:auth): error retrieving information about user tavi : 2 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user pam : 2 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user konchog : 1 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user stavrum : 2 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user rachel : 1 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user affiliates : 24 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user nen : 1 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user cobra : 1 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user pass : 7 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user hacer : 1 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user chung : 1 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user zainee : 1 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user radu : 2 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user alka : 4 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user albert : 5 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user turcia : 2 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user cordell : 2 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user silver : 2 time(s)
        pam_succeed_if(sshd:auth): error retrieving information about user dragon : 3 time(s)

    If someone wants to see the whole log I can upload it somewhere. Am I being hacked? What is all this? I hope someone can help me; this does not look good at all.

    Read the article

  • Weird networking problem (Linksys, Windows 7)

    - by Rohit Nair
    Okay, it's a bit tough to figure out where to start, but here is the basic summary of the issue: during general internet usage, there are times when any attempt to visit a website stalls at "Waiting for somedomain.com". This problem occurs in Firefox, IE and Chrome. No website will load, INCLUDING the router configuration page at 192.168.1.1. Curiously, ping works fine, and other network apps such as MSN Messenger continue to work and I can send and receive messages. Disconnecting and reconnecting to the wireless network seems to fix the problem for a bit, but there are times when it relapses into not loading after every 2-3 HTTP requests. Restarting the router seems to fix the issue, but it can crop up hours or days later. I have a CCNA cert and I know my way around the Windows family of operating systems, so I'm going to list all the things I've tried here. Other computers on the network seem to suffer the same problem, which makes me think it might be a specific problem with something in Win7. The random nature of this issue makes it a bit difficult to confirm, but I can definitely say that I have experienced it on the following systems:

    - Windows 7 64-bit on my desktop
    - Windows Vista 32-bit on my desktop (the desktop has 2 wireless NICs and the problem existed on both)
    - Windows Vista 32-bit on my laptop (both wireless and wired)
    - Windows XP SP3 on another laptop (both wireless and wired)

    Things I have tried:

    - Using Wireshark to sniff packets, which seemed to indicate that although HTTP requests were being SENT out, no packets were coming in to respond to them. However, other network apps continued to work, i.e. I would still receive IMs on Windows Live Messenger.
    - Disabling IPv6: no effect.
    - Updating the router firmware to the latest stock firmware from Linksys: no effect.
    - Switching to dd-wrt firmware: no effect.

    By "no effect" I mean that although the restart required by the firmware updates fixed the problem at the time, it still came back. A couple of weeks back, after a LOT of googling and flipping of various options, I figured it might be a case of router slowdown (http://www.dd-wrt.com/wiki/index.php/Router%5FSlowdown) caused by the fact that I occasionally run a torrent client. I tried changing the configuration as suggested in that router-slowdown link and restarted the router. However, I have not run the torrent client for 12 days now, and yet I still randomly experience this problem. Currently the computer I am using is running Windows 7 64-bit. I would just like to reiterate some of the reasons I am confused by the issue:

    - Even the router config page at 192.168.1.1 would not load, indicating that it's not a problem with the WAN link, but probably a router issue or a local computer issue.
    - For some reason, disconnecting and reconnecting to the wireless network immediately seems to fix the problem.
    - Updating the router firmware, even switching to open-source firmware, did nothing. So it seemed to be a computer issue.
    - On the other hand, I have not seen any mass outrage of people having networking problems with Windows 7 and Linksys routers, especially a problem of this sort, and I have tweaked every network setting I could think of.
    - Although HTTP seems to have trouble, ping works fine, DNS lookups work fine, and other networking apps work fine. However, if I disconnect from Windows Live Messenger and try to reconnect, it fails to reconnect. So although it could receive data over the existing TCP/IP connection, trying to start a new one failed?
Does anyone have any further ideas on debugging or fixing this issue? I am reasonably certain there are no viruses or other malicious apps on my network, and I am also reasonably certain that nobody is accessing my router without my consent.

    Router: Linksys WRT54G2 1.0 running dd-wrt firmware
    Wireless Card: Alfa AWUS036H
    OS: Windows 7 64-bit

EDIT: I tried switching to a clean wireless channel free from interference, but the problem still persisted. I tried connecting directly with a cable, but the problem still persisted. Signed, a very confused and bewildered geek whose knowledge seems to be useless in the face of this frustrating network issue.
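
One avenue that may be worth ruling out, given that only new connections stall while established ones keep working: the TCP auto-tuning introduced in Vista/7 is known to misbehave with some older consumer routers. A quick, reversible test from an elevated command prompt (these are standard netsh commands, offered as a diagnostic sketch rather than a known fix for this exact router):

    netsh interface tcp set global autotuninglevel=disabled
    netsh winsock reset
    :: reboot and retest; if nothing changes, restore the default:
    netsh interface tcp set global autotuninglevel=normal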

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using:

    - Zend Framework 1.11.5 (complete MVC)
    - PHP 5.3.6
    - Apache 2.2.19
    - CentOS 5.6 i686, Virtuozzo on a VPS
    - cPanel WHM 11.30.1 (build 4)
    - MySQL 5.1.56-log, Mysqli API 5.1.56

    The issue started here: http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, PHP is giving me random segmentation faults:

    [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11
    [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
    [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php

    About extensions: when I compile PHP with the "--enable-debug" flag, I have to disable this line:

    zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so"

    Otherwise the server doesn't accept requests and I get a "The connection with the server was reset". It is possible that I have to disable eAccelerator too, for the same reason. I still don't get why Apache sometimes runs with it and sometimes doesn't:

    extension="eaccelerator.so"

    Anyway, after I get httpd running, seg-faults can occur randomly. If I don't compile PHP with the "--enable-debug" flag, I can DETERMINISTICALLY get a PHP crash:

    <?php
    class Admin_DbController extends Controller_BaseController
    {
        public function updateSqlDefinitionsAction()
        {
            $db = Zend_Registry::get('db');
            $row = $db->fetchRow("SHOW CREATE TABLE 222AFI");
        }
    }
    ?>

    BUT if I compile PHP with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash; I have to be doing many parallel requests for a few seconds to get one:

    <?php
    class Admin_DbController extends Controller_BaseController
    {
        public function updateSqlDefinitionsAction()
        {
            $db = Zend_Registry::get('db');
            $tableList = $db->listTables();
            foreach ($tableList as $tableName) {
                $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName));
                file_put_contents(
                    DB_DEFINITIONS_PATH . '/' . $tableName . '.sql',
                    $row['Create Table'] . ';'
                );
            }
        }
    }
    ?>

    Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that when PHP is heavily loaded (with extensions, and with me making many parallel requests) is when it crashes. About starting httpd with "-X": I've tried. The thing is, it is already hard to make PHP crash with --enable-debug, and with the "-X" option (which only enables one child process) I can't do parallel requests, so I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php. My concrete question is: what do I do to get a coredump?

    root@GWT4 [~]# httpd -V
    Server version: Apache/2.2.19 (Unix)
    Server built:   Jul 20 2011 19:18:58
    Cpanel::Easy::Apache v3.4.2 rev9999
    Server's Module Magic Number: 20051115:28
    Server loaded:  APR 1.4.5, APR-Util 1.3.12
    Compiled using: APR 1.4.5, APR-Util 1.3.12
    Architecture:   32-bit
    Server MPM:     Prefork
      threaded:     no
      forked:       yes (variable process count)
    Server compiled with....
    -D APACHE_MPM_DIR="server/mpm/prefork"
    -D APR_HAS_SENDFILE
    -D APR_HAS_MMAP
    -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
    -D APR_USE_SYSVSEM_SERIALIZE
    -D APR_USE_PTHREAD_SERIALIZE
    -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
    -D APR_HAS_OTHER_CHILD
    -D AP_HAVE_RELIABLE_PIPED_LOGS
    -D DYNAMIC_MODULE_LIMIT=128
    -D HTTPD_ROOT="/usr/local/apache"
    -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
    -D DEFAULT_PIDLOG="logs/httpd.pid"
    -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
    -D DEFAULT_LOCKFILE="logs/accept.lock"
    -D DEFAULT_ERRORLOG="logs/error_log"
    -D AP_TYPES_CONFIG_FILE="conf/mime.types"
    -D SERVER_CONFIG_FILE="conf/httpd.conf"
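
On the concrete question of getting a coredump: with mod_fcgid, the core-size limit, the dump location and (because suexec runs PHP setuid) the suid_dumpable switch usually all need opening up before httpd starts. A sketch of the usual steps on a CentOS/cPanel build like the one above (the dump directory and the PID in the gdb line are illustrative):

    # Allow cores, including from setuid processes such as suexec'd PHP
    ulimit -c unlimited
    echo 1 > /proc/sys/fs/suid_dumpable
    mkdir -p /tmp/cores && chmod 1777 /tmp/cores
    echo '/tmp/cores/core.%e.%p' > /proc/sys/kernel/core_pattern
    # Also add to httpd.conf:  CoreDumpDirectory /tmp/cores
    /usr/local/apache/bin/apachectl restart
    # After the next signal 11, pull a backtrace from the matching core:
    gdb /usr/local/cpanel/cgi-sys/php5 /tmp/cores/core.php5.12345
    # then at the (gdb) prompt: bt full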

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user38400
    Hi, we've spent the last two days trying to get Squid 2.7 to work with Ubuntu 9.10. The computer running Ubuntu has two network interfaces, eth0 and eth1, with dhcp running on eth1. Both interfaces have static IPs; eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine:

    ERROR
    The requested URL could not be retrieved
    Invalid Request error was encountered while trying to process the request:
    GET / HTTP/1.1
    Host: www.seriouswheels.com
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9
    Cache-Control: max-age=0
    Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
    Accept-Encoding: gzip,deflate,sdch
    Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Some possible problems are:
    - Missing or unknown request method.
    - Missing URL.
    - Missing HTTP Identifier (HTTP/1.0).
    - Request is too large.
    - Content-Length missing for POST or PUT requests.
    - Illegal character in hostname; underscores are not allowed.
    Your cache administrator is webmaster.

    Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, and /var/log/squid/access.log. Something somewhere is wrong, but we cannot figure out where. Our end goal for all of this is to superimpose content onto every page that a client requests on the LAN. We've been told that Squid is the way to do this, but at this point we are just trying to get Squid set up correctly as our proxy. Thanks in advance.

    squid.conf:

    acl all src all
    acl manager proto cache_object
    acl localhost src 127.0.0.1/32
    acl to_localhost dst 127.0.0.0/8
    acl localnet src 192.168.0.0/24
    acl SSL_ports port 443         # https
    acl SSL_ports port 563         # snews
    acl SSL_ports port 873         # rsync
    acl Safe_ports port 80         # http
    acl Safe_ports port 21         # ftp
    acl Safe_ports port 443        # https
    acl Safe_ports port 70         # gopher
    acl Safe_ports port 210        # wais
    acl Safe_ports port 1025-65535 # unregistered ports
    acl Safe_ports port 280        # http-mgmt
    acl Safe_ports port 488        # gss-http
    acl Safe_ports port 591        # filemaker
    acl Safe_ports port 777        # multiling http
    acl Safe_ports port 631        # cups
    acl Safe_ports port 873        # rsync
    acl Safe_ports port 901        # SWAT
    acl purge method PURGE
    acl CONNECT method CONNECT
    http_access allow manager localhost
    http_access deny manager
    http_access allow purge localhost
    http_access deny purge
    http_access deny !Safe_ports
    http_access deny CONNECT !SSL_ports
    http_access allow localhost
    http_access allow localnet
    http_access deny all
    icp_access allow localnet
    icp_access deny all
    http_port 3128
    hierarchy_stoplist cgi-bin ?
    cache_dir ufs /var/spool/squid/cache1 1000 16 256
    access_log /var/log/squid/access.log squid
    refresh_pattern ^ftp: 1440 20% 10080
    refresh_pattern ^gopher: 1440 0% 1440
    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
    refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
    refresh_pattern . 0 20% 4320
    acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
    upgrade_http0.9 deny shoutcast
    acl apache rep_header Server ^Apache
    broken_vary_encoding allow apache
    extension_methods REPORT MERGE MKACTIVITY CHECKOUT
    cache_mgr webmaster
    cache_effective_user proxy
    cache_effective_group proxy
    hosts_file /etc/hosts
    coredump_dir /var/spool/squid

    access.log:

    1269243042.740      0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html

    00-firewall:

    iptables -F
    iptables -t nat -F
    iptables -t mangle -F
    iptables -X
    echo 1 | tee /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -j MASQUERADE
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128

    networking (/etc/network/interfaces):

    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet static
        address 142.104.109.179
        netmask 255.255.224.0
        gateway 142.104.127.254
    auto eth1
    iface eth1 inet static
        address 192.168.1.100
        netmask 255.255.255.0
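
A detail worth flagging in the access.log line above: GET NONE:// with TCP_DENIED/400 is the classic signature of intercepted traffic arriving at a squid port that has not been told it is transparent, so squid sees a relative request line ("GET / HTTP/1.1") and rejects it as an invalid proxy request. On squid 2.7 the usual fix is a one-word change to the port line used above (sketch; restart squid afterwards):

    # /etc/squid/squid.conf - mark the port as receiving intercepted traffic
    http_port 3128 transparent

    # then
    sudo /etc/init.d/squid restart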

    Read the article

  • Hosting a subversion working copy in an remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I encountered. First of all: what's this about? I am currently trying to set up a distributed working environment for developing a web page. My plan was to set up an SVN repository for version control, a live server where the actual live page is hosted, and a development server where I can work on the page. To ease things I intended not to keep a local copy of the project on my disk, but to actually work directly on the files that the development server hosts. For that I set up a WebDAV directory, under devserver.com/workspace, that actually mapped to the files served under devserver.com/. So I could connect to devserver.com/workspace, change something, and view the results live at devserver.com/. So far this worked perfectly. The next step was to create an SVN repository to take care of my version control. I intended to be able to check in to the repository from my development server and, at any time, deploy any revision from the SVN to the live server with a small shell script, by checking out a copy of that revision into the live server directories. The second part, checking out into the live server, also worked perfectly. The first part, though, is where problems arose. My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows' built-in WebDAV support, which worked quite well; I can create, move, delete and edit files on the WebDAV share from my Windows machine perfectly. The next step was to check out a working copy from the SVN (actually hosted at devserver.com/subversion/) into the WebDAV share. On the first try I used the Eclipse plugin Subversive. The actual checkout worked fine, and I can update and commit stuff to the repository; however, I cannot add any files to the ignore list, which always produces an error. So I tried the same thing with a completely fresh repository using TortoiseSVN, and again it failed with the same errors. Here is what it says when trying to add files to svn:ignore:

    Some of selected resources were not added to ignore.
    svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props'

    This is what Apache2 tells me when I try to add a file to svn:ignore:

    [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated).
    [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0]

    Actually both messages are repeated several times: the first one occurs first and is repeated about 5 times, and the second comes after it and is repeated probably more than 20 times. If I create a regular file, or delete, rename or modify one, none of those messages appear in my error.log. While writing this question I was actually able to add files to svn:ignore using TortoiseSVN. However, after that, Eclipse would not let me commit anymore: the error that used to pop up when adding files to svn:ignore now also shows up while committing. While searching the web I found some people for whom this same message appeared because they had files differing only in upper-/lower-case naming. I checked my repository and did not find such files.
I also read somewhere about people having trouble with WebDAV and file locking, because WebDAV's file-locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed. That error has not appeared anymore since I set up a completely fresh repository and working copy. I would really appreciate any help anyone can provide in fixing this problem! If there are any more questions feel free to ask. I know this is a somewhat unusual setup. Best regards, Daniel
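
A hint sits in the Apache log itself: the "Negotiation: discovered file(s) matching request" lines mean MultiViews content negotiation is intercepting requests for the extensionless .svn bookkeeping files (dir-props), which would explain the failed rename. A sketch of the usual fix, assuming the workspace is served from the DocumentRoot shown in the log (adjust the path to your vhost):

    # In the vhost serving the WebDAV workspace, stop MultiViews from
    # "negotiating" the extensionless files inside .svn/
    <Directory /var/www/devserver.com>
        Options -MultiViews
    </Directory>

An alternative that sidesteps keeping a working copy on WebDAV entirely is a post-commit hook on the repository that runs svn update on a server-local working copy, so the share only ever carries plain files.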

    Read the article

  • Unauthorized Access Exception using Web Deploy to Site when the site root is a UNC path

    - by Peter LaComb Jr.
    I am trying to use Web Deploy to deploy a site where the site is rooted on a UNC path instead of a local drive. This is because I want to have a shared configuration, and have all servers point to the same UNC for content. That would allow me to deploy to one server and have all servers updated at the same time. I've created a share with Everyone and Users read/write. The NTFS permissions have the ID of the appDomain account as full control, and that is the same account that is configured as the specific user in Management Service Delegation. I can log on to the destination server as that ID, access the share and create/delete files. However, I'm getting the following exception in my Microsoft Web Deploy log on the destination server:

    User:
    Client IP: 192.168.62.174
    Content-Type: application/msdeploy
    Version: 9.0.0.0
    MSDeploy.VersionMin: 7.1.600.0
    MSDeploy.VersionMax: 9.0.1631.0
    MSDeploy.Method: Sync
    MSDeploy.RequestId: c060c823-cdb4-4abe-8294-5ffbdc327d2e
    MSDeploy.RequestCulture: en-US
    MSDeploy.RequestUICulture: en-US
    ServerVersion: 9.0.1631.0
    Skip: objectName="^configProtectedData$"
    Provider: auto, Path:

    A tracing deployment agent exception occurred that was propagated to the client. Request ID 'c060c823-cdb4-4abe-8294-5ffbdc327d2e'. Request Timestamp: '8/23/2012 11:01:56 AM'. Error Details:
    ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER
    Microsoft.Web.Deployment.DeploymentDetailedUnauthorizedAccessException: Unable to perform the operation ("Create Directory") for the specified directory ("\\someserver.mydomain.local\sharename\sitename\applicationName"). This can occur if the server administrator has not authorized this operation for the user credentials you are using. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER. ---> Microsoft.Web.Deployment.DeploymentException: The error code was 0x80070005. ---> System.UnauthorizedAccessException: Access to the path '\\someserver.mydomain.local\sharename\sitename\applicationName' is denied.
    at Microsoft.Web.Deployment.NativeMethods.RaiseIOExceptionFromErrorCode(Win32ErrorCode errorCode, String maybeFullPath)
    at Microsoft.Web.Deployment.DirectoryEx.CreateDirectory(String path)
    at Microsoft.Web.Deployment.DirPathProviderBase.CreateDirectory(String fullPath, DeploymentObject source)
    at Microsoft.Web.Deployment.DirPathProviderBase.Add(DeploymentObject source, Boolean whatIf)
    --- End of inner exception stack trace ---
    --- End of inner exception stack trace ---
    at Microsoft.Web.Deployment.FilePathProviderBase.HandleKnownRetryableExceptions(DeploymentBaseContext baseContext, Int32[] errorsToIgnore, Exception e, String path, String operation)
    at Microsoft.Web.Deployment.DirPathProviderBase.Add(DeploymentObject source, Boolean whatIf)
    at Microsoft.Web.Deployment.DeploymentObject.Add(DeploymentObject source, DeploymentSyncContext syncContext)
    at Microsoft.Web.Deployment.DeploymentSyncContext.HandleAdd(DeploymentObject destObject, DeploymentObject sourceObject)
    at Microsoft.Web.Deployment.DeploymentSyncContext.HandleUpdate(DeploymentObject destObject, DeploymentObject sourceObject)
    at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenNoOrder(DeploymentObject dest, DeploymentObject source)
    at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenNoOrder(DeploymentObject dest, DeploymentObject source)
    at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenOrder(DeploymentObject dest, DeploymentObject source)
    at Microsoft.Web.Deployment.DeploymentSyncContext.ProcessSync(DeploymentObject destinationObject, DeploymentObject sourceObject)
    at Microsoft.Web.Deployment.DeploymentObject.SyncToInternal(DeploymentObject destObject, DeploymentSyncOptions syncOptions, PayloadTable payloadTable, ContentRootTable contentRootTable, Nullable`1 syncPassId)
    at Microsoft.Web.Deployment.DeploymentAgent.HandleSync(DeploymentAgentAsyncData asyncData, Nullable`1 passId)
    at Microsoft.Web.Deployment.DeploymentAgent.HandleRequestWorker(DeploymentAgentAsyncData asyncData)
    at Microsoft.Web.Deployment.DeploymentAgent.HandleRequest(DeploymentAgentAsyncData asyncData)

This is shown as the following on the console of the machine where I run the deployment:

    C:\Users\PLaComb> "C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" -source:package='C:\Packages\Deployments\applicationName.zip' -dest:auto,computerName='https://SERVERNAME:8172/msdeploy.axd',includeAcls='True' -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"C:\Packages\Deployments\applicationName.SetParameters.xml" -allowUntrusted
    Info: Using ID 'c060c823-cdb4-4abe-8294-5ffbdc327d2e' for connections to the remote server.
    Info: Adding sitemanifest (sitemanifest).
    Info: Adding virtual path (JMS/admin)
    Info: Adding directory (JMS/admin).
    Error Code: ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER
    More Information: Unable to perform the operation ("Create Directory") for the specified directory ("\\someserver.mydomain.local\sharename\sitename\applicationName"). This can occur if the server administrator has not authorized this operation for the user credentials you are using. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER.
    Error: The error code was 0x80070005.
    Error: Access to the path '\\someserver.mydomain.local\sharename\sitename\applicationName' is denied.
    Error count: 1.
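
Given that the failure is a plain 0x80070005 on a directory create, one thing worth verifying from an elevated prompt on the file server is that both the share-level and the NTFS ACLs grant modify rights to the exact identity the Web Deploy delegation rule runs as; effective access through a UNC path is the intersection of the two, and a mismatch on either side produces exactly this error. A sketch (the account name and paths are placeholders for the redacted ones above):

    icacls \\someserver.mydomain.local\sharename\sitename /grant "MYDOMAIN\svcDeploy:(OI)(CI)M" /T
    net share sharename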

    Read the article

  • RHEL - NFS4: Mounted/Exported as rw, user write permission denied

    - by brendanmac
    Hello, I have nfs4 configured between a RHEL 5.3 server (charlie) and a RHEL 5.4 client (simcom1). The machines are configured to authenticate users via Kerberos against a Windows Server 2008 Active Directory machine called "alpha". Alpha also serves as a DNS and DHCP machine for the local network. I notice that when a user logs in to a RHEL machine for the first time they are issued a UID unique to that machine; the first user to log on gets 10001. So what I see is that users have different UIDs on simcom1 and charlie. When a user does an 'ls -la' from within an nfs4 mount, I would have expected the owner column to show 'nobody', or at least the wrong user name, since UIDs differ between the machines for each user and not all users have logged in to each machine. However, simcom1 correctly resolves usernames in an 'ls -la' executed on files residing on charlie via nfs4. Most troubling is that users are unable to write to files across the nfs mount. The server, charlie, has the root directory exported as rw. The client, simcom1, mounts the export as rw. My configurations are shown below. My question is: how do I configure the RHEL machines to allow users to write files across nfs4 that is already mounted as read/write?

    [root@charlie ~]# more /etc/exports
    / 10.100.0.0/16(rw,no_root_squash,fsid=0)

    [root@charlie ~]# cat /etc/sysconfig/nfs
    #
    # Define which protocol versions mountd
    # will advertise. The values are "no" or "yes"
    # with yes being the default
    #MOUNTD_NFS_V1="no"
    #MOUNTD_NFS_V2="no"
    #MOUNTD_NFS_V3="no"
    #
    #
    # Path to remote quota server. See rquotad(8)
    #RQUOTAD="/usr/sbin/rpc.rquotad"
    # Port rquotad should listen on.
    #RQUOTAD_PORT=875
    # Optinal options passed to rquotad
    #RPCRQUOTADOPTS=""
    #
    #
    # TCP port rpc.lockd should listen on.
    #LOCKD_TCPPORT=32803
    # UDP port rpc.lockd should listen on.
    #LOCKD_UDPPORT=32769
    #
    #
    # Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
    # Turn off v2 and v3 protocol support
    #RPCNFSDARGS="-N 2 -N 3"
    # Turn off v4 protocol support
    #RPCNFSDARGS="-N 4"
    # Number of nfs server processes to be started.
    # The default is 8.
    RPCNFSDCOUNT=8
    # Stop the nfsd module from being pre-loaded
    #NFSD_MODULE="noload"
    #
    #
    # Optional arguments passed to rpc.mountd. See rpc.mountd(8)
    #STATDARG=""
    #RPCMOUNTDOPTS=""
    # Port rpc.mountd should listen on.
    #MOUNTD_PORT=892
    #
    #
    # Optional arguments passed to rpc.statd. See rpc.statd(8)
    #RPCIDMAPDARGS=""
    #
    # Set to turn on Secure NFS mounts.
    SECURE_NFS="no"
    # Optional arguments passed to rpc.gssd. See rpc.gssd(8)
    #RPCGSSDARGS="-vvv"
    # Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
    #RPCSVCGSSDARGS="-vvv"
    # Don't load security modules in to the kernel
    #SECURE_NFS_MODS="noload"
    #
    # Don't load sunrpc module.
    #RPCMTAB="noload"
    #

    [root@simcom1 ~]# cat /etc/fstab
    --start snip--
    charlie:/home /usr/local/dev/charlie nfs4 rw,nosuid, 0 0
    --end snip--

    [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# touch file
    touch: cannot touch 'file': Permission denied
    [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# su
    Password:
    [root@simcom1 /usr/local/dev/charlie/brendanmac]# touch file
    [root@simcom1 /usr/local/dev/charlie/brendanmac]# ls -la file
    -rw------- 1 root root 0 May 26 10:43 file

    Thank you for your assistance, Brendan
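
One point that often bites in exactly this setup: NFSv4's idmapd only translates names for display and attribute-setting (which is why ls -la looks right), but with SECURE_NFS="no" (sec=sys) the actual permission checks still compare raw numeric UIDs, so a user who is 10001 on charlie and 10002 on simcom1 gets denied. A blunt way to confirm and fix it by hand follows (directory-service-backed UIDs, e.g. from the AD via LDAP, are the cleaner long-term answer; the uid value below is an example):

    # Compare the numeric ids for the same account on both hosts
    id brendanmac
    # On the client, align uid/gid with the server's values, then fix ownership
    usermod -u 10001 brendanmac
    groupmod -g 10001 brendanmac
    find /home/brendanmac -xdev -exec chown brendanmac:brendanmac {} +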

    Read the article

  • proftpd, dynamic IP, and filezilla: port troubles

    - by Yami
    The basic setup: two computers, one running proftpd, one attempting to connect via FileZilla. Both Linux (Xubuntu on the server, Kubuntu on the client). Both are at the moment behind a router on a residential (read: dynamic IP) connection; the client is a laptop I plan to take away from the home network, so I'll need this to work externally. I have my router set up to forward specific ports to each machine and, where possible, have plugged those numbers into proftpd (via gadmin, double-checking the config file) and FileZilla. Attempting to connect via active mode using the internal IP works:

    Status: Connecting to 192.168.1.139:8085...
    Status: Connection established, waiting for welcome message...
    Response: 220 Crossroads FTP
    Command: USER <redacted>
    Response: 331 Password required for <redacted>
    Command: PASS *******
    Response: 230 Anonymous access granted, restrictions apply
    Command: OPTS UTF8 ON
    Response: 200 UTF8 set to on
    Status: Connected
    Status: Retrieving directory listing...
    Command: PWD
    Response: 257 "/" is the current directory
    Command: TYPE I
    Response: 200 Type set to I
    Command: PORT 192,168,1,52,153,140
    Response: 200 PORT command successful
    Command: LIST
    Response: 150 Opening ASCII mode data connection for file list
    Response: 226 Transfer complete
    Status: Directory listing successful

    Attempting to connect via the domain name, however, leads to issues. In active mode, PORT is the last command the server receives according to its logs, and in passive mode it's the PASV command. This leads me to believe I'm being redirected to a bad port?

    Active sample:

    Status: Resolving address of <url>
    Status: Connecting to <ip:port>
    Status: Connection established, waiting for welcome message...
    Response: 220 Crossroads FTP
    Command: USER <redacted>
    Response: 331 Password required for <redacted>
    Command: PASS *******
    Response: 230 Anonymous access granted, restrictions apply
    Command: OPTS UTF8 ON
    Response: 200 UTF8 set to on
    Status: Connected
    Status: Retrieving directory listing...
    Command: PWD
    Response: 257 "/" is the current directory
    Command: TYPE I
    Response: 200 Type set to I
    Command: PORT 174,111,127,27,153,139
    Response: 200 PORT command successful
    Command: LIST
    Error: Connection timed out
    Error: Failed to retrieve directory listing

    Passive sample:

    Status: Resolving address of ftp.bonsaiwebdesigns.com
    Status: Connecting to 174.111.127.27:8085...
    Status: Connection established, waiting for welcome message...
    Response: 220 Crossroads FTP
    Command: USER yamikuronue
    Response: 331 Password required for yamikuronue
    Command: PASS *******
    Response: 230 Anonymous access granted, restrictions apply
    Command: OPTS UTF8 ON
    Response: 200 UTF8 set to on
    Status: Connected
    Status: Retrieving directory listing...
    Command: PWD
    Response: 257 "/" is the current directory
    Command: TYPE I
    Response: 200 Type set to I
    Command: PASV
    Response: 227 Entering Passive Mode (64,95,64,197,101,88).
    Command: LIST
    Error: Connection timed out
    Error: Failed to retrieve directory listing

    In both cases the server's log file ends at "PORT" or "PASV"; there's no record of ever receiving a "LIST" command. Just above that I can see the attempt to connect actively via the internal IP, which does indeed include a LIST command. My config file includes "PassivePorts 20001-26999", which are the port forwards I set up for the ftp server, and "Port 8085", which is also forwarded to the same machine. I also have a MasqueradeAddress set up to prevent it from reporting its internal IP, which was an earlier issue I had.
So I think what I'm asking is: is there another setting somewhere that I have to change to get this setup to work?
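
For comparison, the passive reply above advertises 64.95.64.197, which is neither the LAN address nor the 174.111.127.27 the client dialed, so the MasqueradeAddress looks like it is resolving to a stale or wrong record. A proftpd.conf fragment for this kind of NATed setup typically looks like the following (the hostname is a placeholder; note that proftpd resolves a MasqueradeAddress hostname at startup, so after a dynamic-IP change it needs a restart or a numeric update):

    # /etc/proftpd/proftpd.conf - NAT-friendly passive mode
    Port              8085
    MasqueradeAddress myhost.dyndns.example   # must resolve to the CURRENT public IP
    PassivePorts      20001 26999             # forward this same range on the router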

    Read the article

  • Troubleshooting PHP email sending?

    - by darkAsPitch
    I created a website that occasionally emails users when they register/change their password/etc. Every other person, however, cannot or does not receive the emails; they tell me the messages are not even hitting their spam folders. I don't know a ton about MX records or email sending, but when I "Edit DNS Zone" for this domain in particular, there is one MX record listed there. How do you go about troubleshooting botched PHP mail actions?

    UPDATE: Here is my super-simple PHP mailing code:

    $subject = "Subject Here";
    $message = "Emails Message";
    $to = $verified_user_data["email_address"];
    $headers = "From: [email protected]\r\n" .
               "Reply-To: [email protected]\r\n" .
               "X-Mailer: PHP/" . phpversion();
    // returns true on success, false on failure
    $email_result = mail($to, $subject, $message, $headers);

    re: "are you saying that some do and some do not?" @ Jacob: Yes, basically. I send the emails containing the user's login username/password using similar code to the above, and I sell to fairly tech-savvy people. About 50% of the time my customers claim they cannot find their welcome emails in their inbox OR in their spam box; it's as if they never arrived. I seem to have the largest problem with Yahoo email addresses accepting my emails.

    re: "The MX record at your end doesn't factor in, although the SPF record (or lack of it) will. How much access and control do you have on the server itself?" @ John Gardeniers: I rent a dedicated server from Codero, running CentOS 5 with WHM + cPanel, and I have full root access to the entire thing. I don't know much about MX records and/or SPF records; I just want the PHP mail function to work. It doesn't say much about this on the PHP mail function's help page.

    re: "What are you using for the SMTP server?" @ JonLim: No idea. I use the code above when I need to fire off an email to a loyal customer, and that's it. Do I need to be worrying about SMTP servers?

    re: "Could be many, many things. Can you describe how you're sending mail in your code? i.e. are you relaying off of another mail server somewhere, using the local sendmail or postfix? Any consistency in domains that can/cannot receive email? Do you have a PTR record setup from the IP address that you're sending mail out as? What about SPF records?" @ gravyface: I just described my simple code above! I believe I have the most trouble with Yahoo domains; however, "independent" domains (probably running SpamAssassin), e.g. [email protected] as opposed to [email protected], seem to give a lot of trouble as well. I do not know if I have a PTR record set up for the IP address I'm sending mail from. It's probably the same IP address that I set my domain up on, because I didn't do anything extra special. No idea about SPF records either; where can I go to create one?

    Side note: it's a crying shame what havoc the spammers have brought upon our beloved email system.
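
Since the box is WHM/cPanel, PHP's mail() hands the message to the local Exim, so the server-side log shows whether each remote host accepted, deferred or bounced the message, and an SPF record is the usual first step against Yahoo-style silent discards. A sketch (the domain and IP below are placeholders):

    # Watch deliveries while triggering the PHP mail() call
    tail -f /var/log/exim_mainlog
    # See whether an SPF TXT record is already published
    dig +short TXT yourdomain.com
    # If not, add a TXT record along these lines in the DNS zone:
    #   yourdomain.com.  14400  IN  TXT  "v=spf1 a mx ip4:203.0.113.10 ~all"

A matching PTR (reverse DNS) record for the sending IP tends to help with the "independent" SpamAssassin-type domains as well.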

    Read the article

  • Replicate between mysql 5.0.xx community and enterprise edition over ssh

    - by Arlukin
    I'm trying to set up MySQL replication over an SSH tunnel. The odd thing about this setup is that I have one master with mysql 5.0.60sp1-enterprise-gpl-log and one slave with mysql 5.0.67-community-log. Could it be that it's not possible to replicate between the Community and Enterprise editions? As you can see in my logs below, it's possible to log in to the remote server with the mysql client, but the replication gets "Can't connect to MySQL server on '127.0.0.1' (13)". Is there any log file I have forgotten to look in to get more info?

    The version of the slave mysql:

    [root@mysql1-av ~]# mysql -uroot -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 73
    Server version: 5.0.67-community-log MySQL Community Edition (GPL)
    Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

    Logging in to the master mysql with the mysql client over the SSH tunnel works:

    [root@mysql1-av ~]# autossh -f -M 20001 -L 3307:10.200.200.200:3306 [email protected] -N
    [root@mysql1-av ~]# mysql -h127.0.0.1 --port 3307 -uroot -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 5189
    Server version: 5.0.60sp1-enterprise-gpl-log MySQL Enterprise Server (GPL)
    Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
    mysql> Aborted

    Starting the replication, however, breaks on IO:

    [root@mysql1-av ~]# mysql -uroot -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 75
    Server version: 5.0.67-community-log MySQL Community Edition (GPL)
    Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
    mysql> change master to master_host='127.0.0.1', MASTER_PORT=3307, master_user='xxxx', master_password='xxxx', master_log_file='bin.000001';
    Query OK, 0 rows affected (0.00 sec)
    mysql> start slave;
    Query OK, 0 rows affected (0.00 sec)
    mysql> show slave status \G
    *************************** 1. row ***************************
                 Slave_IO_State: Connecting to master
                    Master_Host: 127.0.0.1
                    Master_User: replNSG
                    Master_Port: 3307
                  Connect_Retry: 60
                Master_Log_File: bin.000001
            Read_Master_Log_Pos: 4
                 Relay_Log_File: relay.000001
                  Relay_Log_Pos: 98
          Relay_Master_Log_File: bin.000001
               Slave_IO_Running: No
              Slave_SQL_Running: Yes
                Replicate_Do_DB:
            Replicate_Ignore_DB:
             Replicate_Do_Table:
         Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
    Replicate_Wild_Ignore_Table:
                     Last_Errno: 0
                     Last_Error:
                   Skip_Counter: 0
            Exec_Master_Log_Pos: 4
                Relay_Log_Space: 98
                Until_Condition: None
                 Until_Log_File:
                  Until_Log_Pos: 0
             Master_SSL_Allowed: No
             Master_SSL_CA_File:
             Master_SSL_CA_Path:
                Master_SSL_Cert:
              Master_SSL_Cipher:
                 Master_SSL_Key:
          Seconds_Behind_Master: NULL
    1 row in set (0.00 sec)

    It fails because it can't connect to the master server:

    [root@mysql1-av ~]# tail /var/log/mysqld.log
    120921 22:17:59 [Note] Slave I/O thread killed while connecting to master
    120921 22:17:59 [Note] Slave I/O thread exiting, read up to log 'bin.000001', position 4
    120921 22:17:59 [Note] Error reading relay log event: slave SQL thread was killed
    120921 22:29:36 [Note] Slave SQL thread initialized, starting replication in log 'bin.000001' at position 4, relay log '/var/lib/mysql/relay.000001' position: 4
    120921 22:29:36 [ERROR] Slave I/O thread: error connecting to master '[email protected]:3307': Error: 'Can't connect to MySQL server on '127.0.0.1' (13)' errno: 2003  retry-time: 60  retries: 86400
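
A side note on the error itself: errno 13 is EACCES (permission denied) rather than an ordinary TCP failure, and on CentOS-family systems that usually points at SELinux blocking mysqld from making an outbound connection to the non-standard tunnel port; mixing Community and Enterprise builds of the same 5.0 series should not in itself be a blocker. A quick way to test the SELinux theory (a sketch; put enforcing back on and load a proper module if this turns out to be the cause):

    getenforce
    # Temporarily go permissive, then retry START SLAVE on the slave
    setenforce 0
    # If the IO thread now connects, build and load a policy module from the denials
    grep mysqld /var/log/audit/audit.log | audit2allow -M mysqlrepl
    semodule -i mysqlrepl.pp
    setenforce 1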

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html, for example). So I created a Webdev group containing all of the students, with a default umask of 0002 set in their .bashrc (this makes newly created files 774). The shared folder is owned by the group Webdev with chmod g+s set, so that new files/folders also belong to the group Webdev. The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder using the IDE, the file gets permissions of 644 on the server (not group-writable). However, when I create a new file by connecting with Cyberduck (an SFTP client), the file permissions are 664, as they should be. So I don't understand why Coda would be any different. After some trial and error, I believe Coda first creates the file on the local disk and then uploads it to the server. On a Mac, a newly created file is 644 by default, and when the client uploads a file that is already 644, it stays 644 on the server side (umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder, but a file uploaded from my Mac via SCP doesn't get the default ACL permissions. In Coda there is an option to change file permissions on a transfer; however, this option seems to apply a chmod to all files being uploaded or saved, so when one of the students modifies a file created by someone else and tries to upload or save it, Coda attempts the chmod and fails, because that user isn't the owner of the file. My current solution is bindfs: I mount the shared web folder, and bindfs sets the permissions and group ownership of newly created files. However, bindfs seems to be a bit slow, and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, newly created files on the server would behave the same way (644), since that is the Mac default. Other options:

    1) I teach the students to use ssh/chmod alongside their IDE to change their own file permissions when uploading.
    2) I give all the students' Macs a default umask of 0002, which would upload files with the right permissions.
    3) I write a cron script to fix the file permissions every 5 to 15 minutes (I think this is the worst option if students are working together at the same time).

    Is there any way to make all files uploaded via SCP get default permissions of 664, even though the uploaded file arrives with a lower permission? (After hours of searching, I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites? Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command. Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
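
As a lighter-weight alternative to bindfs or a periodic cron job, an inotify watcher can normalise group and mode the moment a file lands on the server. A sketch (assumes the inotify-tools package is installed; the group name matches the Webdev group above, and the shared folder path is illustrative; run it from an init script or under a supervisor):

    #!/bin/sh
    # watch-perms.sh: fix ownership/mode of newly uploaded files as they appear
    WATCH=/var/www/shared
    inotifywait -mrq -e create -e moved_to --format '%w%f' "$WATCH" |
    while read -r f; do
        chgrp webdev "$f"
        if [ -d "$f" ]; then
            chmod 2775 "$f"    # keep setgid on new directories
        else
            chmod 664 "$f"
        fi
    done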

    Read the article

< Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >