Search Results

Search found 47324 results on 1893 pages for 'end users'.


  • How do I increase the open files limit for a non-root user?

    - by iCode
    This is happening on Ubuntu 12.04 (Precise) 64-bit, kernel Linux 3.2.0-25-virtual. I'm trying to increase the number of open files allowed for a user. This is for my Eclipse Java application, where the current limit of 1024 is not enough. According to the posts I've found so far, I should be able to put lines into /etc/security/limits.conf like this:

        *  soft  nofile  4096
        *  hard  nofile  4096

    to increase the number of open files allowed for all users. But that's not working for me, and I think the problem is not related to that file. For all users, the default limit is 1024, regardless of what is in /etc/security/limits.conf (I have been rebooting after changing that file):

        $ ulimit -n
        1024

    Now, despite the entries in /etc/security/limits.conf, I can't increase that:

        $ ulimit -n 2048
        -bash: ulimit: open files: cannot modify limit: Operation not permitted

    The weird part is that I can change the limit downwards, but can't change it upwards - not even back to a number which is below the original limit:

        $ ulimit -n 800
        $ ulimit -n
        800
        $ ulimit -n 900
        -bash: ulimit: open files: cannot modify limit: Operation not permitted

    As root, I can change that limit to whatever I want, up or down. It doesn't even seem to care about the supposedly system-wide limit in /proc/sys/fs/file-max:

        # cat /proc/sys/fs/file-max
        188897
        # ulimit -n 188898
        # ulimit -n
        188898

    So far, I haven't found any way to increase the open files limit for a non-root user, and I really don't want to run my application as root. How should I do this properly? I have looked at all the posts and tried the given options, but no luck!
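
    For what it's worth, the usual culprit on Ubuntu is that pam_limits never runs for the session, so limits.conf is never read. A minimal sketch of the standard fix, assuming login goes through the common session stack (the 4096 values are just examples):

        # /etc/security/limits.conf -- "*" means all users
        *  soft  nofile  4096
        *  hard  nofile  4096

        # make sure the PAM limits module is loaded for the login path, e.g. in
        # /etc/pam.d/common-session (and common-session-noninteractive):
        session required pam_limits.so

    After logging out and back in, ulimit -n should report the new soft limit, and a non-root user can then raise it up to (but not past) the hard limit.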

    Read the article

  • Video on demand streaming solution

    - by Rafal Saltarski
    We are looking into building a commercial VOD service and I'm doing research on the subject. We need the video to be protected with DRM so users can't rip/copy it (I realise it's mostly useless, but the lawyers demand it from us), and the server solution to be scalable and able to distribute content in HD to a high number of clients. Besides that, we would need to be able to develop a player integrated with the payment system, so users that haven't bought access would, for example, only see the first 30 seconds of video and then have the playback interrupted, or only be able to watch video for 72 hours after payment. Also, it would be nice if we could distribute video to mobile platforms like Android/WP8/iOS, but that's not a priority at the moment. I have zero experience on this topic, so I would greatly appreciate any feedback, or pointers to things I should read about, like protocols or key phrases I should know. Thanks

    Read the article

  • How can I get a virus by just visiting a website?

    - by Janet Jacobs
    It is common knowledge that you can get a virus just by visiting a website. But how is this possible? Do these viruses attack Windows, Mac and Linux users, or are Mac/Linux users immune? I understand that I obviously can get a virus by downloading and executing a .exe in Windows but how can I get a virus just by accessing a website? Are the viruses programmed in JavaScript? (It would make sense since it is a programming language that runs locally.) If so, what JavaScript functions are the ones commonly used?

    Read the article

  • IIS 7 throws 401 responses on application whose physical directory has been shared

    - by tonyellard
    I have an IIS 7, Windows Server 2008 R2 box with a relatively fresh install. I've deployed a .NET 2.0 application that uses Windows Authentication to the server and, from the default website, added it as an application. I updated the IIS authentication settings to enable Windows Authentication. When I shared out the application's physical directory so that a developer could deploy updates, users began to receive 401 errors. I can reliably recreate the issue by sharing out the directory of any newly created application. The IIS user has the necessary read/write access to the directory. What do I need to do to keep web users from receiving 401s while at the same time allowing this developer to have access to the physical directory for deployments? Thanks in advance!
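
    A hedged note for anyone hitting the same thing: creating a share through the wizard can rewrite the NTFS ACLs on the folder, which would explain the sudden 401s. A sketch of restoring read access and granting the share separately - the path, share name, and accounts below are examples, not taken from the question:

        :: re-grant read/execute to the IIS worker group and to authenticated web users
        icacls "C:\inetpub\wwwroot\MyApp" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)RX"
        icacls "C:\inetpub\wwwroot\MyApp" /grant "NT AUTHORITY\Authenticated Users:(OI)(CI)RX"
        :: leave the share permission broad and let NTFS do the restricting
        net share AppShare="C:\inetpub\wwwroot\MyApp" /grant:CONTOSO\developer,FULL

    The effective access is the intersection of share and NTFS permissions, which is why a newly created share with default permissions can lock web users out even when NTFS looks right.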

    Read the article

  • Didmo did mo' to advance Java ME technology than other companies

    - by hinkmond
    Here's a company that's keeping Java ME tech real in the field. DIDMO is the creator of Magmito, a user-generated mobile content creation service. That's a good thing to have when there are so many mobile platforms out there to choose from. See: Didmo does mo' Here's a quote: "DIDMO's mission is to deliver the market-leading mobile application generator. We will achieve this by meeting the growing market demand for a true end-to-end solution for easy mobile content creation and universal delivery. Our software offering will incorporate an award-winning toolset with universal reach (from Java to [that other platform])." Make an app today! Just make sure it's a Java ME app... Hinkmond

    Read the article

  • Handling & processing credit card payments

    - by Bob Jansen
    I'm working on a program that charges customers on a pay-as-you-go, per-month model. This means that instead of the customers paying their invoices at the start of the month, they pay at the end of the month. In order to secure the payments, I want my customers' credit card information stored so that they can be charged automatically at the end of the month. I do not have the resources, time, or risk tolerance to handle and store my customers' credit card information on my own servers, and am looking for a third-party solution. I'm a tad overwhelmed by all the different options and services that are out there, and was wondering if anyone with experience has any recommendations and tips. I'm having difficulty finding services that allow me to store my customers' credit card information and charge them automatically. Most of them seem to offer an invoice-styled approach.

    Read the article

  • ffmpeg: cut multiple input files with seeking to one output file

    - by Josef Kufner
    I have a list of video files (loaded from a database), each with the start and end time of the requested interval:

        # file    begin  end
        v1.mp4    1:01   2:01
        v2.mp4    3:02   3:32
        v3.mp4    2:03   5:23

    And I need to create a single video file containing these intervals: [0:00]---v1---[1:00]---v2---[1:30]---v3---[4:50]

    I prefer using ffmpeg, since it is installed on the server. The caller program is written in PHP. It is easy to cut one input to one output (argument escaping removed for clarity, $length being the interval length):

        exec("ffmpeg -ss $begin -i $input_file -t $length -c copy $output_file");

    Is there any easier way than executing ffmpeg for each interval and then executing it once more to concatenate the prepared clips together? I really do not want to have a lot of temporary files or to deal with complex process handling.
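
    One hedged possibility, assuming a reasonably recent ffmpeg: the concat demuxer accepts per-file inpoint/outpoint directives, so a single invocation can cut and join in one pass with no temporary clips. The file names and times below just mirror the example table (converted to seconds):

        # intervals.txt
        file 'v1.mp4'
        inpoint 61
        outpoint 121
        file 'v2.mp4'
        inpoint 182
        outpoint 212
        file 'v3.mp4'
        inpoint 123
        outpoint 323

        ffmpeg -f concat -safe 0 -i intervals.txt -c copy output.mp4

    With -c copy the cut points snap to keyframes, so re-encode (e.g. -c:v libx264) if frame accuracy matters. The PHP side then only has to write intervals.txt and run one exec().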

    Read the article

  • Write to windows share mounted in Ubuntu

    - by aidan
    I used to mount a Windows share on an Ubuntu server, with an entry in fstab:

        //data/SharedFolder /media/SharedFolder/ smbfs user,defaults,credentials=/root/.creds,uid=root,gid=root 0 0

    /root/.creds is a text file with three lines: my username, password and domain. Users on the Ubuntu server could write to this mount, but then I upgraded to 10.04 and now only root can write. Regular users can still read, though. mount currently tells me:

        //data/SharedFolder on /media/SharedFolder type cifs (rw,mand,noexec,nosuid,nodev)

    How do I make it world-writeable again? Thanks
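
    A hedged sketch of what usually fixes this: on 10.04 the old smbfs driver is gone and the mount falls back to cifs, which defaults to owner-only permissions on the client side. Setting the mode options explicitly (the values here are examples) restores world write:

        //data/SharedFolder /media/SharedFolder cifs user,credentials=/root/.creds,uid=root,gid=root,file_mode=0777,dir_mode=0777 0 0

    Then remount with sudo umount /media/SharedFolder && sudo mount /media/SharedFolder to pick up the change.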

    Read the article

  • Notification if SyncToy fails

    - by Joel Coehoorn
    I support a number of laptop users. In the past (before there were many laptops), each user's computer was set up so that their My Documents folder was mapped to a shared folder on the server. This worked very well for desktops, but has several obvious downsides for laptops (no files when you're off-site, etc). I'm exploring several alternatives for laptops to better map the shared drives, and SyncToy seems the best so far. I have a couple trial users set up so that it syncs automatically whenever they log in, along with a desktop icon they can click if they know they'll need something saved before the next login. My problem is that I'm concerned how I, as the maintainer of this system, can spot failures. I don't want my first indication of a problem to come after a user drops their laptop in a lake and it turns out nothing was synced for the last year. Any ideas?
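
    One hedged approach, assuming SyncToy 2.x with its command-line runner in the default install location (paths, share and file names below are examples): wrap SyncToyCmd.exe in a small script that records failures somewhere central, and schedule that instead of the bare sync:

        @echo off
        "C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R
        if errorlevel 1 (
            echo %date% %time% SyncToy run failed on %computername% >> \\fileserver\itlogs\synctoy-failures.log
        )

    A periodic look at that log (or a scheduled task that alerts when the wrapper exits non-zero) then surfaces laptops that have been failing silently. This assumes SyncToyCmd returns a non-zero exit code on failure, which is worth verifying by forcing a failed run first.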

    Read the article

  • isolate web servers on intranet with dfl-800

    - by microchasm
    I administer a small network (10 users). I'm getting ready to deploy an internal webapp that will be hosted and accessed locally only. There are about 10 users on the network (192.168.111.0/24), a Win2k3 server, Apache (RHEL), and MySQL (RHEL), and various miscellaneous peripherals. I'd like to isolate the Apache and SQL boxes into a separate area of the LAN to keep things easier to maintain/grow. I've been reading about VLANs, subnets, etc. I'm not clear, however, on which would be the best solution for our setup. Thanks for any tips and/or advice.

    Read the article

  • Impacting the Future through Collaboration at Alliance 14

    - by Jeb Dasteel-Oracle
    We're hearing good things about the Alliance 14 conference held in Las Vegas by the Higher Education Users Group (HEUG) back in March. For those of you who aren't familiar with the Alliance conferences, they are global events dedicated to enhancing and educating their members and the world on how higher educational institutions can utilize Oracle applications to change how they do business. The HEUG is an all-volunteer organization made up of individuals who collaborate with Oracle as part of the evolving higher education industry. Conference participants network with peers from other institutions (regionally and globally) to share challenges, discuss solutions and ideas, and collaborate on HEUG strategic initiatives. The HEUG enables each institution to be a part of the ever-changing Oracle landscape. Watch the video below and hear directly from the attendees about their experience with Oracle and how being part of the HEUG has allowed them to collaborate with one of their most important resources... and with each other. Oracle is committed to fostering a strong and independent network of user groups worldwide. Currently over 900 groups provide dynamic forums for customers to share information, experiences and expertise. If you're interested in more information, or in joining an Oracle User Group, click through and become part of a vibrant network of engaged users finding the best ways to get the most value from their Oracle investment and collaborating to provide a unified feedback voice to Oracle. Catch you next time, Jeb

    Read the article

  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like Server Fault does). I've looked at TWiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?", which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then goes ahead and creates a new version (ver 2.0) that adds many new features and changes some existing features.

    Now, how do we support both products in the same KB? The naive method is to copy all articles from 1.0 and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on metadata like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. So articles can be tagged with a tag representing the software version, topic area, etc. Users can then filter based on tags - an example search might be "version_1 printing", which straight away gives a list of articles with all these tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with Drupal, but I was hoping for something that works out of the box.

    Read the article

  • How can I restrict a group to reading only two particular folders with Windows Server?

    - by Lord Torgamus
    I have a group of users on Windows Server 2003 who need to be able to read the contents of two directories but not be able to access anything else on the server (not even read-only access). One of the directories is K:\projectFour\config, and the other is similarly formatted, so it would be okay for group members to be able to list the contents of K:\ and K:\projectFour\ but not actually read anything in those directories. I've found several resources via SF/Google, including how to restrict individual folders/drives and how to allow users to only run specific executables, but that information ultimately didn't solve my issue. Sorry if this is a really simple thing to do; I'm usually a developer and don't know the first thing about servers or group policies. Finally, I should mention that this isn't a fully concrete question, as it will be implemented eventually but I don't personally have a copy of Windows Server 2003 to test with right now.
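
    For what it's worth, a hedged sketch with icacls (shipped with Server 2003 SP2; the group name and second directory are examples): grant the group list-only access on the parents, with no inheritance flags so the grant applies to that folder alone, and read on the target folders with inheritance, and make sure the group appears in no other ACLs on the server:

        icacls K:\ /grant "MYDOMAIN\ProjectReaders:RX"
        icacls K:\projectFour /grant "MYDOMAIN\ProjectReaders:RX"
        icacls K:\projectFour\config /grant "MYDOMAIN\ProjectReaders:(OI)(CI)RX"

    Since NTFS denies anything not explicitly granted, everything else on the server stays inaccessible to the group's members without any deny entries.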

    Read the article

  • Unified Communications Suite Ships New Version

    - by joesciallo
    We shipped the latest version (7.0.5.0.0) of Unified Communications Suite. The following information should get you started: Get the Software, New Features, Release Notes.

    Some Changes for 7.0.5.0.0

    Convergence: Version 3.0.0.0.0 enables you to use the add-on framework to add third-party services to the Convergence UI. These services include:

    - Advertising
    - Click-to-call service
    - Multinetwork IM
    - SMS (both one-way and two-way)
    - Social media applications (Facebook, Twitter, and Flickr)
    - Video and voice calling capability

    For more information, see Overview of Add-on Services in Convergence.

    Calendar Server: Version 7.0.4.14.0 provides a number of security enhancements, including supporting the SSL protocol for all front-end and back-end communications, and the ability to list hosts that are allowed to send iSchedule POST requests. For more information, see Securing Communications to Calendar Server Back Ends.

    New Platform Support: Oracle GlassFish Server 3, Oracle Solaris 11, and Oracle Enterprise Linux 6.x are supported in this release of Communications Suite.

    Read the article

  • Forms Authentication across Sub-Domains on local IIS

    - by Parminder
    I asked this question at SO: http://stackoverflow.com/questions/8278015/forms-nauthentication-across-sub-domains-on-local-iis Now I'm asking it here. I know a cookie can be shared across multiple subdomains using this setting in Web.config:

        <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation" timeout="120" path="/" domain=".mydomain.com"/>

    But how do I replicate the same thing on my local machine? I am using Windows 7 and IIS 7 on my laptop, so I have the sites:

        localhost.users/ for my actual site users.mysite.com
        localhost.host/ for host.mysite.com

    and similar.
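
    A hedged sketch of one way to do it (the .local names are examples): give the local sites host names that share a parent domain via the hosts file, bind the IIS sites to those host names, and point the forms domain attribute at the shared parent:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    users.mysite.local
        127.0.0.1    host.mysite.local

        <!-- Web.config on both sites -->
        <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation" timeout="120" path="/" domain=".mysite.local"/>

    The cookie domain has to be a real parent domain of both host names, which is why flat names like localhost.users and localhost.host cannot share a cookie.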

    Read the article

  • Nagios: Which services should I monitor on different roles of servers?

    - by Itai Ganot
    I've started working at a new workplace, and my first task is to build a Nagios server and configure it to monitor the servers on the network. Since I'm starting from scratch, I wanted to hear from you, experienced users: which checks should I configure for each role? For example, there are some basic checks which I run on each Linux machine I monitor: SSH, ping, load, current users, swap usage, etc. Now my question is, which specific checks should I run for a database server, for a network switch, or for httpd servers? (I currently monitor how many httpd processes are running.) Thanks in advance
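
    Not a full answer, but a hedged sketch of the config shape, assuming the standard nagios-plugins set (host names and descriptions below are examples): the role-specific part is usually just extra service definitions attached to the host:

        define service {
            use                  generic-service
            host_name            db01
            service_description  MySQL
            check_command        check_mysql
        }

        define service {
            use                  generic-service
            host_name            web01
            service_description  HTTP
            check_command        check_http
        }

    Switches are typically covered with check_ping plus SNMP-based checks (port status, bandwidth) via check_snmp, since they expose little else to poll.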

    Read the article

  • mRemote and RoyalTS both are not able to RDP connect me to a Windows Server 2008 system?

    - by djangofan
    I am completely stuck on this one. If I start an RDP session independently of these two programs, everything works fine. My RDP session connects, I click "OK" to accept the "Notice To Users" security message, and then it shows me the login screen where I enter my password manually. Now, if I try to use either mRemote or RoyalTS to create this connection, I get the same behavior except that I get a "The user name or password is incorrect." message. Now, I know this cannot be true, since I can manually connect with RDP. So, what is the problem with these two pieces of software that prevents me from logging in? I have no problems connecting to Windows XP systems with these programs. Additionally, I wish I knew how to get one of these programs to automatically click the "OK" button on the "Notice To Users" message while automatically attempting to log me in as part of the login process. Can they do that?

    Read the article

  • Backups of Exchange 2007 SP3 using VSS are abnormally large

    - by Stew
    I have recently implemented Veeam Backup and Recovery 6.0, and have noticed when backing up my Exchange server via incremental updates that it is transferring way more data than expected. The backup is incremental and set up to use VSS. VSS is stable and healthy, according to vssadmin. Exchange 2007 SP3 is running on Windows Server 2008 R2; just last weekend I installed the latest rollup for Exchange. I thought the nightly incrementals were large, but perhaps my users really are sending that much mail, so I tested by taking one incremental backup, waiting 10 minutes, and taking a second. The second incremental backup transferred 5.8 GB of data. We as an organization are absolutely NOT putting 5.8 GB of data on the mail server every 10 minutes. Have any other Veeam users seen something similar? Is my test flawed? Are there other considerations for VSS?

    Read the article

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to:

    - Connect to the database
    - Retrieve data
    - Lock database records
    - Manage transactions

    ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to:

    - Author and test business logic in components which automatically integrate with databases
    - Reuse business logic through multiple SQL-based views of data, supporting different application tasks
    - Access and update the views from browser, desktop, mobile, and web service clients
    - Customize application functionality in layers without requiring modification of the delivered application

    The goal of ADF Business Components is to make the business services developer more productive.

    ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas:

    Simplifying Data Access
    - Design a data model for client displays, including only necessary data
    - Include master-detail hierarchies of any complexity as part of the data model
    - Implement end-user Query-by-Example data filtering without code
    - Automatically coordinate data model changes with business services layer
    - Automatically validate and save any changes to the database

    Enforcing Business Domain Validation and Business Logic
    - Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references
    - Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support
    - Navigate relationships between business domain objects and enforce constraints related to compound components

    Supporting Sophisticated UIs with Multipage Units of Work
    - Automatically reflect changes made by business service application logic in the user interface
    - Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values
    - Simplify multistep web-based business transactions with automatic web-tier state management
    - Handle images, video, sound, and documents without having to use code
    - Synchronize pending data changes across multiple views of data
    - Consistently apply prompts, tooltips, format masks, and error messages in any application
    - Define custom metadata for any business components to support metadata-driven user interface or application functionality
    - Add dynamic attributes at runtime to simplify per-row state management

    Implementing High-Performance Service-Oriented Architecture
    - Support highly functional web service interfaces for business integration without writing code
    - Enforce best-practice interface-based programming style
    - Simplify application security with automatic JAAS integration and audit maintenance
    - "Write once, run anywhere": use the same business service as plain Java class, EJB session bean, or web service

    Streamlining Application Customization
    - Extend component functionality after delivery without modifying source code
    - Globally substitute delivered components with extended ones without modifying the application

    ADF Business Components implements the business service through the following set of cooperating components:

    Entity object: An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. Entity objects are basically your one-to-one representation of a database table: each table in the database has one and only one EO. The EO contains the mapping between columns and attributes, as well as the business logic and validation. These are your core data services, responsible for updating, inserting and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields:
    - Name: the name of the attribute we expose in our data model
    - Type: the data type of the attribute in our application
    - Column: the column to which we want to map the attribute
    - Column Type: the type of the column in the database

    View object: A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in a view object actually come from entity objects: you build a VO by selecting which EOs participate and which of their attributes you want to use (hence the Entity Usage column, which shows the relation between VO and EO). In the Query tab you can see the query that will be generated for the VO; at this stage we just use it for informational purposes.

    Application module: An application module is the controller of your data layer. It is responsible for holding the transaction, and it exposes the data model to the view layer: you expose the VOs through the application module. This is the abstraction of your data layer that you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work for an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible, and the default behavior provided by the base components can be easily overridden or augmented.

    When you create EOs, a foreign key is translated into an association in the model. It defines the type of relation, which side is the master and which the child, and the visibility of the association. A similar concept exists to identify relations between view objects: view links. These are almost identical to associations, except that a view link is based on attributes defined in the view objects (it can also be based on an association).

    Here's a short summary:
    - Entity Objects: representations of tables
    - Associations: relations between EOs; representations of foreign keys
    - View Objects: logical model
    - View Links: relationships between view objects
    - Application Module: interface to your application
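
    To make the division of labor concrete, here is a minimal, hedged sketch of a client using the oracle.jbo API; the application module, view object, and attribute names are illustrative, not from any real project:

        import oracle.jbo.ApplicationModule;
        import oracle.jbo.Row;
        import oracle.jbo.ViewObject;
        import oracle.jbo.client.Configuration;

        public class AppModuleClient {
            public static void main(String[] args) {
                // check out a root application module (the transaction holder)
                ApplicationModule am = Configuration.createRootApplicationModule(
                        "model.AppModule", "AppModuleLocal");

                // the view object is the SQL-shaped view the AM exposes
                ViewObject vo = am.findViewObject("EmployeesView");
                vo.executeQuery();
                while (vo.hasNext()) {
                    Row row = vo.next();
                    System.out.println(row.getAttribute("FirstName"));
                }

                // release the AM; true discards its state
                Configuration.releaseRootApplicationModule(am, true);
            }
        }

    Note that the entity objects never appear in client code: rows fetched through the view object delegate validation and DML to the underlying EOs when the transaction commits.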

    Read the article

  • Lenovo Thinkpad X1 Carbon support

    - by Robottinosino
    I am considering selling my Mac to get money towards a Lenovo ThinkPad X1, because what I really want is to be running an Ubuntu system all the time. Is this machine completely supported in Ubuntu, with no tiny little feature missing just because I am "going Linux"?

    Optional user story section, skip to the question below if you don't have time: I have a friend who bought a "works on Ubuntu" system a year ago and has hated the fact ever since: the battery lasts less than if he boots into Windows (which he despises), which he ascribes to "no good OS/hardware integration and support for advanced chipset power management features"; odd behaviour on suspend/resume/hibernate (he says: "when it works 90% of the time and the other 10% it makes you lose your work, it is as good as broken - 90% is the same as 0%"); some occasional graphics card glitches he can perfectly well live with and has almost grown affectionate to; and finally, and this is what would make him undo his choice if he could, bad "input device drivers". He says the trackpoint and trackpad just "feel different", "so much better" on Windows, and that was impossible to know from the website brochure. That story makes me very doubtful... but I want to abandon this "walled garden" of a prison that is my Mac and go Ubuntu all the way, no doubt about that! My dilemma at this time is just: "I don't want to live with those eternal frustrations, for sure"!

    Here's a directly answerable phrasing of my question:

    - Is the Lenovo ThinkPad X1 supported on Ubuntu? Yes/no, and in which version?
    - Which hardware features are not supported? Provide a list
    - Optionally: sort the list in descending order of frustration from your experience
    - Optionally: mention if there are acceptable workarounds to the "out-of-the-box" condition described in the earlier points, and whether they bring frustration down at least to "tolerable" levels

    Comment: the Ubuntu hardware certification page is so not-for-end-users it's unreal. Whoa. What would make it end-user friendly is:

    - A link saying "buy here and you'll be just fine; this is the right configuration for you, and it'll work as long as you press BUY on that page and don't browse further"
    - Removing mentions of "may" and "might not work". Just tell it straight: press buy here and you will get a working system with the exception of A, B, C (so that I can decide whether the philosophical "freedom pleasure" I get from escaping an Apple world is enough to off-balance the loss, for instance, of Bluetooth capabilities - something that I of course use on my Mac but "could" lose in order to use free (as in freedom) software)

    The certification page fails to dispel doubts in me as an end-user. I don't feel "eased into Ubuntu"; I feel "partially informed".

    Read the article

  • 1 Zettabyte Is Equal to 1 Million Petabytes

    - by Gopinath
    Geeks recently coined a new English term, zettabyte, to measure the rapidly growing digital information footprint. So what is a zettabyte? A zettabyte is equal to 1 million petabytes, or 1 billion terabytes, or 1 trillion gigabytes.

    Symbol To Represent Zettabyte

    According to Wikipedia, the symbol ZB is used to represent a zettabyte, so we can write 10 ZB to represent 10 zettabytes.

    Humanity's Digital Output Will Be 1.2 Zettabytes By End Of 2010

    Are you wondering why we need a term to measure digital data? Tech research firm IDC has recently published a report that estimates the digital footprint created by us so far at 800,000 petabytes (0.8 zettabytes) - the equivalent of 800,000,000,000 GB. This footprint is expected to pass 1.2 zettabytes by the end of 2010.
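
    As a quick sanity check on the conversions (decimal SI units, sketched in LaTeX):

        1\ \mathrm{ZB} = 10^{21}\ \mathrm{B} = 10^{6}\ \mathrm{PB} = 10^{9}\ \mathrm{TB} = 10^{12}\ \mathrm{GB}

    so IDC's 0.8 ZB estimate is 800,000 PB, and the projected 1.2 ZB is 1.2 trillion GB.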

    Read the article

  • SSL Certificates, two-way authentication and loadbalancers

    - by 5arx
    We're looking to implement two-way authentication with client certificates for a privileged subset of our application users. The idea is that if a certificate is detected, the user will be asked for an additional password/PIN, which will be used to verify the certificate and the user. Ordinary users will continue to authenticate themselves via the standard login mechanism. Our production environment (hosted by a well-known company) comprises load-balanced application servers, and I'm unclear as to how this setup will handle the certificates; I'm not certain if there are any pitfalls I should be aware of. I would very much appreciate some thoughts, comments or real-world advice on the subject.
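
    For what it's worth, the usual pattern is to terminate TLS (and the client-cert handshake) at the balancer and pass the verification result to the app servers in headers, since the balancer otherwise hides the client from them. A hedged sketch in nginx terms - your host's balancer may expose the same idea under different settings, and the addresses and paths here are examples:

        upstream app_servers {
            server 10.0.0.11:8080;
            server 10.0.0.12:8080;
        }

        server {
            listen 443 ssl;
            ssl_certificate         /etc/ssl/server.crt;
            ssl_certificate_key     /etc/ssl/server.key;
            ssl_client_certificate  /etc/ssl/trusted-client-ca.crt;
            ssl_verify_client optional;   # ordinary users present no cert

            location / {
                # forward the verification result and the cert to the app tier
                proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
                proxy_set_header X-SSL-Client-Cert   $ssl_client_cert;
                proxy_pass http://app_servers;
            }
        }

    The alternative is TCP passthrough so each app server does its own TLS, at the cost of losing HTTP-level balancing; either way, make sure the app tier only trusts those headers when they arrive from the balancer.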

    Read the article

  • Keep source IP after NAT

    - by John Miller
    Until today I used a cheap router so I could share my internet connection and keep a webserver online too, using NAT. Users' IPs ($_SERVER['REMOTE_ADDR']) were fine; I was seeing the real public IPs of users. But as traffic grew, I had to install a Linux server (Debian) to share my internet connection, because my old router couldn't keep up with the traffic anymore. I shared the internet via iptables using NAT, but now, after forwarding port 80 to my webserver, instead of seeing real user IPs, I see my gateway IP (the Linux box's internal IP) as every user's IP address. How do I solve this issue? I edited my post so I can paste the rules I'm currently using:

        #!/bin/sh
        # I made a script to set the rules

        # I flush everything here.
        iptables --flush
        iptables --table nat --flush
        iptables --delete-chain
        iptables --table nat --delete-chain
        iptables -F
        iptables -X

        # I drop everything as a general rule, but this is disabled under testing
        # iptables -P INPUT DROP
        # iptables -P OUTPUT DROP

        # these are the loopback rules
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # here I set the SSH port rules, so I can connect to my server
        iptables -A INPUT -p tcp --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

        # These are the forwards for port 80
        iptables -t nat -A PREROUTING -p tcp -s 0/0 -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p tcp -s 192.168.42.3 --sport 80 -j ACCEPT

        # These are the forwards for bind/dns
        iptables -t nat -A PREROUTING -p udp -s 0/0 -d xx.xx.xx.xx --dport 53 -j DNAT --to 192.168.42.3:53
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p udp -s 192.168.42.3 --sport 53 -j ACCEPT

        # And these are the rules so I can share my internet connection
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0:1 -j ACCEPT

    If I delete the MASQUERADE part, I see the real IP while echoing it with PHP, but I don't have internet. What can I do to have internet and still see the real IP while the ports are forwarded?

    ** xx.xx.xx.xx is my public IP. I hid it for security reasons.
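
    A hedged reading of what's going on, in case it helps: eth0:1 is only an alias of eth0, so the blanket "-o eth0 -j MASQUERADE" also matches packets being forwarded back into the LAN toward 192.168.42.3, rewriting their source to the gateway. Restricting the masquerade to LAN-originated traffic heading out to the internet should keep client IPs intact (the addresses below assume the 192.168.42.0/24 LAN from the question):

        # masquerade only what leaves the LAN for the outside world,
        # never what is forwarded back in to the webserver
        iptables -t nat -A POSTROUTING -s 192.168.42.0/24 ! -d 192.168.42.0/24 -o eth0 -j MASQUERADE

    The two SNAT rules can then likely be dropped as well: the DNAT alone preserves the client source address, as long as the webserver's default gateway is the Linux box, so replies flow back through it.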

    Read the article

  • So, I thought I wanted to learn frontend/web development and break out of my comfort zone...

    - by ripper234
    I've been a backend developer for a long time, and I really swim in that field. C++/C#/Java, databases, NoSQL, caching - I feel very much at ease around these platforms/concepts. In the past few years, I started to taste end-to-end web programming, and recently I decided to take a job offer on a front-end team developing a large, complex product. I wanted to break out of my comfort zone and become more of an "all-around developer". Problem is, I'm getting more and more convinced I don't like it. Things I like about backend programming, and miss in frontend work:

    - More interesting problems - When I compare designing a server that handles massive data to adding another form to a page or changing the validation logic, I find the former a lot more interesting.
    - Refactoring refactoring refactoring - I am addicted to Visual Studio with ReSharper, or IntelliJ. I feel very comfortable writing code as it goes without investing too much thought, because I know that with a few clicks I can refactor it into beautiful code. To my knowledge, this doesn't exist at all in JavaScript.
    - IntelliSense and navigation - I hate looking at a bunch of JS code without instantly being able to know what it does. In VS/IntelliJ I can summon the documentation, navigate to the code, climb up inheritance hierarchies... life is sweet.
    - Auto-completion - Just hit Ctrl-Space on an object to see what you can do with it.
    - Easier to test - With almost any backend feature, I can use TDD to capture the requirements, see a bunch of failing tests, then implement, knowing that if the tests pass I did my job well. With frontend, while tests can help a bit, I find that most of the testing is still manual - fire up that browser and verify the site didn't break. I miss that feeling of "a green CI means everything is well with the world."

    Now, I've only seriously practiced frontend development for about two months, so this might seem premature... but I'm getting a nagging feeling that I should abandon this quest and return to my comfort zone, because, well, it's so comfy and fun. Another point worth mentioning in this context is that while I am learning some frontend tools, a lot of what I'm learning is our company's specific infrastructure, which I'm not sure will be very useful later in my career. Any suggestions or tips? Do you think I should give frontend programming "a proper chance" of at least six to twelve months before calling it quits? Could all my pains be growing pains that will magically disappear as I get more experienced? Or is gaining this perspective valuable enough, even if I plan to do more "backend stuff" later on, that it's worth gritting my teeth and continuing with my learning?

    Read the article

  • Files not accessible

    - by gokul
    My system is running on a PC whose C:\ drive is out of space, so I tried to delete some files and clean up to get more space. I found that the %TEMP% folder {C:\Users\Username\AppData\Local\Temp} takes up a lot of space, and I tried to delete the files in it. But when I opened it, it alerted me with the message:

        C:\Users\Username\AppData\Local\Temp is not accessible.
        The file or directory is corrupted and unreadable.

    What should I do? Is deleting files from Temp harmful to the computer?

    Read the article
