Search Results

Search found 343 results on 14 pages for 'nix'.

Page 13/14 | < Previous Page | 9 10 11 12 13 14  | Next Page >

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel only rtorrent through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:

      client
      dev tun
      proto udp
      remote openvpn.xxx.com 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca /etc/openvpn/xxx/keys/ca.crt
      cert /etc/openvpn/xxx/keys/client.crt
      key /etc/openvpn/xxx/keys/client.key
      tls-auth /etc/openvpn/xxx/keys/tls.key 1
      ns-cert-type server
      comp-lzo
      verb 3
      auth-user-pass
      script-security 3
      reneg-sec 0

    My idea is that I could run a sockd proxy internally that redirects traffic into the OpenVPN tunnel, and use the *nix "proxifier" application "tsocks" to let rtorrent connect through that proxy (as rtorrent doesn't support proxies). The trouble is configuring sockd, because my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf As my IP changes at each connect, I don't know what to put in that config file. I have no control over the host-side config file. Any help is welcome, and any other method is very welcome too.
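
    One way around the changing address (a minimal sketch, not from the original thread) is to regenerate sockd.conf from a template every time the tunnel comes up, for example from an OpenVPN "up" hook (up scripts receive the assigned address in the ifconfig_local environment variable). The template name, placeholder token and init-script name below are assumptions:

      #!/bin/sh
      # Hypothetical helper: rebuild sockd.conf whenever the tunnel comes up, so the
      # proxy always binds to the current VPN address.
      # Assumes a template at /etc/sockd.conf.template containing the placeholder @VPNIP@.

      TUN_DEV=tun0
      # current address assigned to the tunnel interface
      VPN_IP=$(ip -4 addr show dev "$TUN_DEV" | awk '/inet / {print $2}' | cut -d/ -f1)

      [ -n "$VPN_IP" ] || { echo "no address on $TUN_DEV yet" >&2; exit 1; }

      sed "s/@VPNIP@/$VPN_IP/g" /etc/sockd.conf.template > /etc/sockd.conf
      # restart the proxy so it binds to the new address (init script name is a guess)
      /etc/init.d/danted restart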

    Read the article

  • Walk me through the Linux log files (please)

    - by Andy
    Hey all, I just tried loading a 2MB file in gedit and it silently died on me. I was wondering if anything might appear in a log file that might help me diagnose this: I checked syslog and found out it segfaulted. While doing this I realised that I don't really know anything about how logging is organised on *nix machines. All I know at the moment is that logs are typically stored in /var/log/... is there anywhere else that I should know about? I'm familiar with application-specific logs, such as Apache's. I understand that dmesg is the bootup log and syslog is a general system log... is that right? So would someone mind taking me through the most useful logs? Are the two logs I mention in the final point the only general logs? And what are the funky numbers at the start of lines in dmesg? Seconds since startup? Please include anything in your answers that you think would improve my understanding here and help me track down anomalies! TIA Andy
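
    A minimal sketch of poking at the usual suspects (my addition, not from the thread); the paths assume a typical Debian/Ubuntu-style layout, and the leading numbers in dmesg are indeed seconds (with microseconds) since boot:

      #!/bin/bash
      # Common general-purpose logs (Debian/Ubuntu-style paths; other distros differ):
      #   /var/log/syslog or /var/log/messages  - general system log
      #   /var/log/kern.log                     - kernel messages (what dmesg shows)
      #   /var/log/auth.log                     - logins, sudo, ssh
      #   /var/log/Xorg.0.log                   - X server
      ls -l /var/log

      # Segfaults are reported by the kernel, so they usually show up here:
      grep -i segfault /var/log/syslog /var/log/kern.log 2>/dev/null | tail

      # The numbers at the start of dmesg lines are seconds.microseconds since boot.
      # Rough conversion of one such timestamp to wall-clock time:
      boot_epoch=$(( $(date +%s) - $(cut -d. -f1 /proc/uptime) ))
      ts=123                       # example: a dmesg line stamped [123.456789]
      date -d "@$(( boot_epoch + ts ))" '+%F %T'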

    Read the article

  • How to control/check CheckPoint rule changes (and other system events)

    - by user35115
    I need to check/control all system events on many CheckPoint FW1 boxes - don't misunderstand, not rules triggering, but events such as admins logging on, rule changes, etc. I found out that I can export logs using two methods: grab the logs directly, or use a special script that redirects CheckPoint log entries to syslog, FW1-Loggrabber. But it's not clear to me whether such logs also contain the information I need (admin logons, rule changes), and if they do, whether it's possible to filter events. I also suppose that, since the system is based on a *nix platform, there must be a way to use the system's built-in functions to do what I want; unfortunately I don't know where to dig. Maybe you know? Updated: new info - "FW-1 can pipe its logs to syslog via Unix's logger command, and there are third party log-reading utilities". So, the main question is: what is the best way to do this task? Has anybody already solved such a problem? P.S. I'm new to CheckPoint, so any information will be useful to me. Thank you.
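
    A minimal sketch of the syslog route (my addition, not CheckPoint documentation): once the audit entries reach a plain text stream - via FW1-Loggrabber or by piping an export through logger(1) - filtering is ordinary grep. The file path and the match strings below are assumptions about the environment and log format:

      #!/bin/bash
      # Assumed location where the grabbed/forwarded FW-1 audit log ends up:
      FW_LOG=/var/log/fw1-audit.log

      # feed an already-exported log into syslog (the tag is arbitrary):
      # cat exported_audit.txt | logger -t fw1-audit

      # watch for admin logons and policy/rule changes (patterns are guesses and
      # will need adjusting to the actual audit-log wording):
      tail -F "$FW_LOG" | grep -Ei 'logged in|log[ _]?on|install policy|policy installed|rule.*(chang|modif)'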

    Read the article

  • How to connect from Ruby to MS SQL Server

    - by apetrov
    Hi Crowd! I'm trying to connect to a SQL Server 2005 database from a *NIX machine. I have the following configuration: Linux 64-bit; ruby -v: ruby 1.8.6 (2007-09-24 patchlevel 111) [x86_64-linux]; important gems: dbd-odbc (0.2.4), dbi (0.4.1), ActiveRecord SQL Server adapter (as a plugin), ruby-odbc 0.9996 (installed without any options); unixODBC is installed; FreeTDS is installed.

      cat /etc/odbcinst.ini
      [FreeTDS]
      Description = TDS driver (Sybase/MS SQL)
      Driver = /usr/lib/libtdsodbc.so
      Setup = /usr/lib/odbc/libtdsS.so
      CPTimeout =
      CPReuse =
      FileUsage = 1

    DSN:

      DRIVER=FreeTDS;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;
      or
      DRIVER=/usr/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;

    I receive the following error:

      >> ActiveRecord::Base.sqlserver_connection({"mode"=>"ODBC", "adapter"=>"sqlserver", "dsn"=>my_dns})
      DBI::DatabaseError: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
          from /usr/lib/ruby/1.8/DBD/ODBC/ODBC.rb:95:in `connect'
          from /usr/lib/ruby/1.8/dbi.rb:424:in `connect'
          from /usr/lib/ruby/1.8/dbi.rb:215:in `connect'
          from /opt/ublip/rails/current/vendor/plugins/activerecord-sqlserver-adapter/lib/active_record/connection_adapters/sqlserver_adapter.rb:47:in `sqlserver_connection'

    It looks like ODBC is unable to find the appropriate ODBC driver, but I have no idea why. I had a problem with /usr/lib/libtdsodbc.so, which is empty in the default Debian package freetds-dev, but I solved it by removing the broken package and installing from source. I will appreciate any thoughts! Thanks & Regards. Note: I'm able to connect using the same steps on Mac OS 10.5.
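
    A minimal debugging sketch (mine, not from the post): IM002 means the driver manager could not resolve a DSN, so define one in odbc.ini and test it outside Ruby first. The DSN name, server and credentials are placeholders, and the exact FreeTDS key names can vary with the FreeTDS version:

      #!/bin/bash
      # Define a DSN the unixODBC driver manager can look up (run as root).
      cat >> /etc/odbc.ini <<'EOF'
      [mydsn]
      Driver      = FreeTDS
      Server      = XXXX
      Port        = 1433
      Database    = XXX
      TDS_Version = 8.0
      EOF

      # check which odbc.ini / odbcinst.ini unixODBC is actually reading
      odbcinst -j

      # test the TDS layer itself (FreeTDS), then the ODBC layer (unixODBC)
      tsql -H XXXX -p 1433 -U XXX -P XXXX
      isql -v mydsn XXX XXXX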

    Read the article

  • Strategies for Accessing an Application with a COM API From PHP

    - by Alan Storm
    Background: Experienced PHP developer with a mostly *nix background. I'm writing a PHP application that needs to interact with a proprietary 3rd party system. The 3rd party system is Windows only. The PHP application will be living on a separate Linux based system The 3rd party application has been described as having a "COM API" that I'll need to talk to from the PHP application. What does this look like architecturally speaking? I'm starting with the COM section of the PHP manual, but I have specific questions. Specific Questions: Can I talk directly to a COM API from a PHP application running on another server? If so, how? (what PHP extensions would I need, or what protocols/PHP functions would I be using to talk to the API) If the answer to number 2 is no, I'd assume I'd need some kind of application on the Windows machine that can talk to COM, and then a service on the windows machine I can hit with PHP. Are there prebuilt frameworks for this kind of thing? Is this all nonsense and/or did I say something exceedingly stupid? (Quite possible, as I'm a little fuzzy on what "COM" does and doesn't cover) I'm obviously not looking for a full solution here, I'm just trying to get a general idea of what is and isn't possible and what kind of things I'll want to Google for. Thanks!
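
    Not from the original question, but a sketch of the pattern question 3 describes: expose the COM calls behind a small HTTP service on the Windows box and call that service from the Linux-side PHP app. The host name, port, endpoint and payload below are made up for illustration:

      #!/bin/bash
      # Hypothetical round trip: the Linux box never speaks COM; it just hits an HTTP
      # wrapper service running on the Windows machine, which talks to the COM API locally.
      WIN_HOST=windows-box.example.local

      curl -s -X POST "http://$WIN_HOST:8080/api/do-something" \
           -H 'Content-Type: application/json' \
           -d '{"widget_id": 42}'

      # The PHP application would issue the same request (e.g. with curl) and decode
      # the response; PHP's own COM extension only works when PHP itself runs on Windows.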

    Read the article

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [An Updated List, 21st Aug 09]. Help me compile a list of all the advantages & disadvantages of building an application on the Google App Engine.
    Pros:
    1) No need to buy servers or server space (no maintenance).
    2) Makes solving the problem of scaling much easier.
    Cons:
    1) Locked into Google App Engine??
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (the JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.
    Known issues: http://code.google.com/p/googleappengine/issues/list
    Hard limits: apps per developer - 10; time per request - 30 sec; files per app - 3,000; HTTP response size - 10 MB; Datastore item size - 1 MB; application code size - 150 MB.
    Pro or con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can't run on App Engine without modification, because they require a relational database.

    Read the article

  • Activate a python virtual environment using activate_this.py in a fabfile on Windows

    - by Rudy Lattae
    I have a Fabric task that needs to access the settings of my Django project. On Windows, I'm unable to install Fabric into the project's virtualenv (issues with Paramiko + pycrypto deps). However, I am able to install Fabric in my system-wide site-packages, no problem. I have installed Django into the project's virtualenv and I am able to use all the "python manage.py" commands easily when I activate the virtualenv with the "VIRTUALENV\Scripts\activate.bat" script. I have a Fabric tasks file (fabfile.py) in my project that provides tasks for setup, test, deploy, etc. Some of the tasks in my fabfile need to access the settings of my Django project through "from django.conf import settings". Since the only usable Fabric install I have is in my system-wide site-packages, I need to activate the virtualenv within my fabfile so Django becomes available. To do this, I use the "activate_this" module of the project's virtualenv in order to have access to the project settings and such. Using "print sys.path" before and after I execute activate_this.py, I can tell the Python path changes to point to the virtualenv for the project. However, I still cannot import django.conf.settings. I have been able to successfully do this on *nix (Ubuntu and CentOS) and in Cygwin. Do you use this setup/workflow on Windows? If so, can you help me figure out why this won't work on Windows, or provide any tips and tricks to get around this issue? Thanks and Cheers. REF: http://virtualenv.openplans.org/#id9 | Using Virtualenv without bin/python. Local development environment: Python 2.5.4, Virtualenv 1.4.6, Fabric 0.9.0, Pip 0.6.1, Django 1.1.1, Windows XP (SP3).

    Read the article

  • Bash Templating: How to build configuration files from templates with Bash?

    - by FractalizeR
    Hello. I'm writing a script to automate creating configuration files for Apache and PHP for my own webserver. I don't want to use any GUIs like CPanel or ISPConfig. I have some templates of Apache and PHP configuration files. The Bash script needs to read the templates, perform variable substitution, and output the parsed templates into some folder. What is the best way to do that? I can think of several ways; which one is the best, or maybe there are better ways to do it? I want to do it in pure Bash (it's easy in PHP, for example).

    1) http://stackoverflow.com/questions/415677/how-to-repace-variables-in-a-nix-text-file

      template.txt:
        the number is ${i}
        the word is ${word}

      script.sh:
        #!/bin/sh
        # set variables
        i=1
        word="dog"
        # read in the template one line at a time, and replace variables
        # (more natural (and efficient) way, thanks to Jonathan Leffler)
        while read line
        do
            eval echo "$line"
        done < "./template.txt"

    BTW, how do I redirect output to an external file here? Do I need to escape something if variables contain, say, quotes?

    2) Using cat & sed, replacing each variable with its value. Given template.txt:

        The number is ${i}
        The word is ${word}

      Command:
        cat template.txt | sed -e "s/\${i}/1/" | sed -e "s/\${word}/dog/"

    This seems bad to me because of the need to escape many different symbols, and with many variables the line will be far too long. Can you think of some other elegant and safe solution?
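
    A sketch of one common alternative (my addition, not from the post): envsubst from GNU gettext substitutes ${VAR} references without eval's quoting pitfalls, and the redirection question is simply a matter of "> outfile". Paths and variable names below are examples:

      #!/bin/bash
      # Render a ${VAR}-style template with envsubst (requires GNU gettext) and
      # redirect the result to a file; envsubst only sees exported variables.
      export i=1
      export word="dog"

      # only substitute the variables we list, leaving any other $... text untouched
      envsubst '${i} ${word}' < ./template.txt > ./output/template.conf

      # the eval-based loop from the question would redirect the same way:
      # while read -r line; do eval echo "\"$line\""; done < ./template.txt > ./output/template.conf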

    Read the article

  • What's a good Java API for creating Word documents?

    - by Bill James
    I have a new app I'll be working on where I have to generate a Word document that contains tables, graphs, a table of contents and text. What's a good API to use for this? How sure are you that it supports graphs, ToCs, and tables? What are some hidden gotchas in using them? Some clarifications: I can't output a PDF; they want a Word doc. They're using MS Word 2003 (or 2007), not OpenOffice. The application is running on a *nix app server. It'd be nice if I could start with a template doc and just fill in some spaces with tables, graphs, etc. Thanks for the help. Edit: Several good answers below, each with their own faults as far as my current situation goes. Hard to pick a "final answer" from them. Think I'll leave it open, and hope for better solutions to be created. Edit: The OpenOffice UNO project does seem to be closest to what I asked for. While POI is certainly more mainstream, it's too immature for what I want.

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way user 'zach's blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created, (mostly) non-shared data gets put into one place (currently with MySQL that would be the appname_users database mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach

    Read the article

  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows will look in the current directory for any DLLs before checking the system dir, so it loads mine. I have one DLL that redirects with this method:

      #pragma comment(linker, "/export:oldFunc=nwshader.newFunc")

    to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL. I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problem is that I need to call the original function (I would use various DLL dynamic-loading methods for this, I think) and need to be able to do Detour-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community refers to it as. I can code C++, but I'm more used to Windows. Being OpenGL, the only part that needs to be modified to be compatible with Linux is the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
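
    The Linux analogue of the DLL-in-current-directory trick (my sketch, not from the post) is an LD_PRELOAD shared object whose exported functions shadow the real ones and forward via dlsym(RTLD_NEXT, ...). The file names below are hypothetical:

      #!/bin/bash
      # Build a 32-bit hook library and preload it into the game binary.
      # nwshader_hooks.c is a hypothetical source file that defines functions with
      # the same names as the OpenGL calls to intercept and forwards to the
      # originals via dlsym(RTLD_NEXT, "glFoo").
      gcc -m32 -shared -fPIC -o libnwshader.so nwshader_hooks.c -ldl

      # preload it only for this process; the dynamic linker resolves the game's
      # OpenGL calls against our library before libGL.so
      cd /path/to/nwn
      LD_PRELOAD=/path/to/libnwshader.so ./nwmain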

    Read the article

  • Linker, Libraries & Directories Information

    - by m00st
    I've finished both my C++ 1/2 classes and we did not cover anything on linking to libraries or adding additional libraries to C++ code. I've been having a hard time trying to figure this out; I've been unable to find basic information on linking to object files and libraries. Initially I thought the problem was the IDE (NetBeans and Code::Blocks). However, I've been unable to get wxWidgets and gtkmm set up. Can someone point me in the right direction on the terminology and basic information about #including files and linking files in a C++ application? Basically I want/need to know everything in regard to this process: the difference between .dll, .lib, .o, .lib.a, .dll.a; the difference between a .h and a "library" (.dll, .lib, correct?). I understand I need to read the documentation of the compiler I am using; however, all compilers (that I know of) use linkers and headers, so I need to learn this information. Please point me in the right direction! :] So far on my quest I've found out: the linker links already-compiled libraries to your project; .a files are static libraries (.lib on Windows); a .dll on Windows is a shared library (.so on *nix). Thanks
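
    As an illustration (mine, not from the post), here are the bare mechanics of headers vs. libraries with g++ on a *nix system; the file and library names are placeholders:

      #!/bin/bash
      # Headers (.h) only declare things; the compiler needs -I to find them.
      # Libraries hold the compiled definitions; the linker needs -L/-l to find them.
      # main.cpp and "widget" are placeholder names.

      # 1) compile: source -> object file (.o), headers resolved via -I
      g++ -c main.cpp -I/usr/local/include/widget -o main.o

      # 2) link: object files + libraries -> executable
      #    -lwidget searches the -L paths for libwidget.so (shared) or libwidget.a (static)
      g++ main.o -L/usr/local/lib -lwidget -o myapp

      # GUI toolkits usually ship pkg-config metadata that expands to the right flags:
      g++ main.cpp $(pkg-config --cflags --libs gtkmm-2.4) -o myapp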

    Read the article

  • Python, Ruby, and C#: Use cases?

    - by thaorius
    Hi everyone. For as long as I can remember, I've always had a "favorite" language, which I use for most projects until, for some particular reason, there is no way/point in using it for project XYZ. At that point, I find myself rusty (and sometimes outdated) on other languages+libraries+toolchains. So I decided I would just use some languages/libs/tools for some things and others for other things, effectively keeping them all fresh (there would obviously be exceptions; I'm not looking for an arbitrary rule set, but some guidelines). I wanted an opinion on what your standard use cases (new projects) would be for Python, Ruby, and C# (Mono). At the moment, I have it split like this:
    C#: mid-to-large-sized projects (mainly server-side daemons); high performance (I hardly ever need C's performance, but Python just doesn't cut it); relatively low footprint (vs the JVM, for example).
    Ruby: web applications.
    Python: general-use scripts (automation, system config, etc); small-to-mid-sized projects; prototyping; web applications.
    About Ruby, I have no idea what to use it for that I can't use Python for (especially considering Python is more easily found installed by default). And I like both languages (though I'm really new to Ruby), which makes things even worse. As for C#, I have not used a Windows-powered computer in a few years, I don't make things for Windows computers, and I don't mind waiting for Mono to implement some new features. That being said, I haven't found many people on the internet using it for server-side *nix programming (not web related). I would appreciate some insight on this too. Thanks for your time.

    Read the article

  • Looking to reimplement build toolchain from bash/grep/sed/awk/(auto)make/configure to something more

    - by wash
    I currently maintain a few boxes that house a loosely related cornucopia of coding projects, databases and repositories (ranging from a homebrew *nix distro to my class notes), maintained by myself and a few equally pasty-skinned nerdy friends (all of said cornucopia is stored in SVN). The vast majority of our code is in C/C++/assembly (a few utilities are in python/perl/php, we're not big java fans), compiled in gcc. Our build toolchain typically consists of a hodgepodge of make, bash, grep, sed and awk. Recent discovery of a Makefile nearly as long as the program it builds (as well as everyone's general anxiety with my cryptic sed and awking) has motivated me to seek a less painful build system. Currently, the strongest candidate I've come across is Boost Build/Bjam as a replacement for GNU make and python as a replacement for our build-related bash scripts. Are there any other C/C++/asm build systems out there worth looking into? I've browsed through a number of make alternatives, but I haven't found any that are developed by names I know aside from Boost's. (I should note that an ability to easily extract information from svn commandline tools such as svnversion is important, as well as enough flexibility to configure for builds of asm projects as easily as c/c++ projects)
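
    One detail from the requirements - embedding svnversion output in a build - is easy to sketch regardless of which build system wins (my example, with made-up file and macro names):

      #!/bin/bash
      # Capture the working-copy revision at build time so the code can report it.
      # version.h and PROJECT_SVN_REV are made-up names.
      rev=$(svnversion -n .)
      echo "#define PROJECT_SVN_REV \"$rev\"" > version.h

      # A build tool would regenerate version.h before compiling, e.g. in a Makefile:
      #   version.h: FORCE
      #   	echo "#define PROJECT_SVN_REV \"$$(svnversion -n .)\"" > $@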

    Read the article

  • How to inherit the current path when invoking Maven's exec-maven-plugin?

    - by wishihadabettername
    I have an <exec-maven-plugin> which calls an external command (in this case, svnversion). The command is in the path for the current user. However, when a separate shell is spawned by the plugin, the path is not initialized. I don't want to hardcode or define a variable for each external command (there would be too much to maintain, especially since there are both Windows and *nix users). My pom.xml contains the following:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.1</version>
        <executions>
          <execution>
            <id>svnversion-exec</id>
            <phase>process-resources</phase>
            <goals>
              <goal>exec</goal>
            </goals>
            <configuration>
              <executable>svnversion</executable>
              <arguments>
                <argument><![CDATA[ >version.txt ]]></argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>

    Currently I get the following output:

      [INFO] [exec:exec {execution: svnversion-exec}]
      'svnversion' is not recognized as an internal or external command, operable program or batch file.
      [ERROR] BUILD ERROR: Result of cmd.exe /X /C "svnversion >version.txt" execution is: '1'.

    Thank you!

    Read the article

  • System Calls in Windows & Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems, and I was wondering about the Windows domain. The calling convention in Linux:

      mov $SYS_Call_NUM, %eax
      mov $param1, %ebx
      mov $param2, %ecx
      int $0x80

    That's it. That is how we make a system call in Linux. Reference for all system calls in Linux: regarding which $SYS_Call_NUM and which parameters to use, there is this reference: http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html OFFICIAL reference: http://kernel.org/doc/man-pages/online/dir_section_2.html Calling convention in Windows: ??? Reference for all system calls in Windows: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html , but how do I use these in assembly unless I know the calling convention? OFFICIAL: ??? If you say they didn't document it, then how is one going to write a libc for Windows without knowing the system calls? How is one going to do Windows assembly programming? At least in driver programming one needs to know these, right? Now, what's up with the so-called Native API? Are "Native API" and "Windows system calls" different terms referring to the same thing? To confirm, I compared these from two UNOFFICIAL sources - system calls: http://www.metasploit.com/users/opcode/syscalls.html ; Native API: http://undocumented.ntinternals.net/aindex.html My observations: all system calls begin with the letters Nt, whereas the Native API consists of many functions that do not begin with Nt; the Windows system calls are a subset of the Native API - system calls are just part of the Native API. Can anyone confirm this and explain?

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, first - this is not meant to be a "which is better" ignorant flame-war thread... Rather, I generally need help in making an architecture decision/argument to put forward to my boss. Skipping the details - I would simply love to know and find the results of anyone who has done performance comparisons of shell scripts vs [insert general-purpose interpreted programming language here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone ever done these comparisons, for different use cases: hitting a database in an XYZ number of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops - and also not hitting the database, just a regular 50k-loop type comparison doing different types of calculations, and things of that nature? In particular, for right now I need a comparison of hitting an Oracle DB from a shell script vs, let's say, C# (again, any GPPL that's interpreted would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations/instructions, etc... Before you ask "why not just write a quick test yourself?", the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question on here to the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in a near-perpetual deadline crunch as it is ;). Thanks so much in advance,
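
    For the "write a quick test" side of it, a minimal sketch of a shell-side timing harness (mine, not from the post); the SQL, the connection string and the loop counts are placeholders, and the same workload timed in a C#/Java loop gives the other half of the comparison:

      #!/bin/bash
      # Crude database harness: run the same trivial query N times through SQL*Plus
      # and time the whole batch. Each iteration pays the cost of spawning a client
      # process, which is a real part of the shell-script overhead being measured.
      N=1000
      time for ((i = 0; i < N; i++)); do
          echo "SELECT 1 FROM dual;" | sqlplus -s scott/tiger@ORCL > /dev/null
      done

      # Pure-computation comparison point: 50k iterations of integer arithmetic
      time for ((i = 0; i < 50000; i++)); do
          : $((i * i % 97))
      done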

    Read the article

  • How to find all initializations of instance variables in a Java package?

    - by Hank Gay
    I'm in the midst of converting a legacy app to Spring. As part of the transition, we're converting our service classes from an "instantiate new ones whenever you need one" style to a Springleton style, so I need a way to make sure they don't have any state. I'm comfortable on the *nix command line, I have access to IntelliJ (this strikes me as a good fit for Structural Search and Replace, if I could figure out how to use it), and I could track down an Eclipse install if that would help. I just want to make absolutely sure I've found all the possible problems. UPDATE: Sorry for the confusion. I don't have a problem finding places where the old constructor was being called. What I'm looking for is a "bullet-proof" way to search all 100+ service classes for any sort of internal state. The most obvious one I could think of (and the only one I've really found so far) is cases where we use memoization in the classes, so they have instance variables that get initialized internally instead of via Spring. This means that when the same Springleton gets used for different requests, data can leak between them. Thanks.
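
    A command-line starting point (my sketch; a heuristic, not the bullet-proof answer asked for) is to grep the service classes for non-final instance fields and for in-class field assignments. The package path is a placeholder and the patterns will have false positives and negatives:

      #!/bin/bash
      SRC=src/main/java/com/example/services    # placeholder path to the service classes

      # candidate field declarations: access modifier, line ends in ";", not static/final/abstract
      grep -rnE --include='*.java' '^[[:space:]]*(private|protected|public)\b.*;[[:space:]]*$' "$SRC" \
        | grep -vE '\b(static|final|abstract)\b'

      # assignments to fields (e.g. memoization) often look like "this.foo = ..."
      grep -rnE --include='*.java' 'this\.[A-Za-z_][A-Za-z0-9_]*[[:space:]]*=[^=]' "$SRC"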

    Read the article

  • The shell dotfile cookbook

    - by Jason Baker
    I constantly hear from other people about how much of the stuff they've used to customize their *nix setup they've shamelessly stolen from other people. So in that spirit, I'd like to start a place to share that stuff here on SO. Here are the rules: DON'T POST YOUR ENTIRE DOTFILE. Instead, just show us the cool stuff. One recipe per answer You may, however, post multiple versions of your recipe in the same answer. For example, you may post a version that works for bash, a version that works for zsh, and a version that works for csh in the same answer. State what shells you know your recipe will work with in the answer. Let's build this cookbook as a team. If you find out that an answer works with other shells other than the one the author posted, edit it in. If you like an idea and rewrite it to work with another shell, edit the modified version in to the original post. Give credit where credit is due. If you got your idea from someone else, give them credit if possible. And for those of you (justifiably) asking "Why do we need another one of these threads?": Most of what I've seen is along the lines of "post your entire dotfile." Personally, I don't want to try to parse through a person's entire dotfile to figure out what I want. I just want to know about all the cool parts of it. It's helpful to have a single dotfile thread. I think most of the stuff that works in bash will work in zsh and it may be adapted to work with csh fairly easily.

    Read the article

  • Bash script to find a directory, list its contents and sub-folder info

    - by lithiumion
    Hi, I want to write a script that will:
    1 - locate the folder "store" on a *nix filesystem
    2 - move into that folder
    3 - print a list of its contents with last modification dates
    4 - calculate the sub-folders' sizes
    The folder's absolute path changes from server to server, but the folder name always remains the same. There is a config file that contains the correct path to that folder, but it doesn't give the absolute path to it. Sample config:

      Account ON
      DIR-Store /hdd1
      Scheduled YES

    According to the config file the absolute path would be "/hdd1/backup/store/". I need the script to grep the "/hdd1" (or whatever follows the word "DIR-Store"), add "/backup/store/" to it, move into the folder "store", print a list of its contents, and calculate the sub-folders' sizes. Until now I have manually edited the script on each server to reflect the path to the "store" folder. Here is a sample script:

      #!/bin/bash
      echo " "
      echo " "
      echo "Moving Into Directory"
      cd /hdd1/backup/store/
      echo "Listing Directory Content"
      echo " "
      ls -alh
      echo "*******************************"
      sleep 2
      echo " "
      echo "Calculating Backup Size"
      echo " "
      du -sh store/*
      echo "********** Done! **********"

    I know I could use grep: cat /etc/store.conf | grep DIR-Store. I just don't know how to get around selecting the path, adding the "/backup/store/", and moving ahead. Any help will be appreciated.
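
    A minimal sketch of the missing piece (mine, not from the thread), assuming /etc/store.conf holds a single DIR-Store line as in the sample config:

      #!/bin/bash
      # Extract the base path from the config, build the store path, then list and size it.
      # Assumes /etc/store.conf contains a line like: DIR-Store /hdd1
      base=$(awk '/^DIR-Store/ {print $2}' /etc/store.conf)
      store="$base/backup/store"

      [ -d "$store" ] || { echo "store folder not found: $store" >&2; exit 1; }

      cd "$store"
      echo "Listing Directory Content"
      ls -alh
      echo "Calculating Backup Size"
      du -sh ./*/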

    Read the article

  • Static IP on FEDORA12 from Virtualbox

    - by Krazy_Kaos
    I'm trying to get my FEDORA12 system to have a static IP - inside VirtualBox - inside Ubuntu. Let me rephrase that: I have an Ubuntu 9.04 system with VirtualBox and a FEDORA12 VM there, and I would like to give the Fedora VM a static IP (Amahi needs it), but I'm getting stuck... I'm using NAT (if that's any help). I tried a few tutorials, but no go. I'm kind of new to the *nix world, but I'm old school on M$. Edit: screenshots of UBUNTU 9.04 (the host that has the VM) and FEDORA (sorry, can't post pics... not enough rep). INFO:

    GUEST WITH STATIC IP - IFCONFIG:

      eth0      Link encap:Ethernet  HWaddr 08:00:27:35:CC:DE
                inet addr:192.168.1.55  Bcast:192.168.1.255  Mask:255.255.255.0
                inet6 addr: fe80::a00:27ff:fe35:ccde/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:7 errors:0 dropped:0 overruns:0 frame:0
                TX packets:2764 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:574 (574.0 b)  TX bytes:127121 (124.1 KiB)
                Interrupt:11 Base address:0xc020

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:1856 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1856 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:181587 (177.3 KiB)  TX bytes:181587 (177.3 KiB)

    NETSTAT -NR:

      Kernel IP routing table
      Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
      192.168.2.1     0.0.0.0         255.255.255.255  UH       0 0          0 eth0
      192.168.1.0     0.0.0.0         255.255.255.0    U        0 0          0 eth0
      0.0.0.0         192.168.2.1     0.0.0.0          UG       0 0

    GUEST WITH DHCP - IFCONFIG:

      eth0      Link encap:Ethernet  HWaddr 08:00:27:35:CC:DE
                inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
                inet6 addr: fe80::a00:27ff:fe35:ccde/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:105 errors:0 dropped:0 overruns:0 frame:0
                TX packets:2966 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:49787 (48.6 KiB)  TX bytes:149969 (146.4 KiB)
                Interrupt:11 Base address:0xc020

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:1903 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1903 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:185931 (181.5 KiB)  TX bytes:185931 (181.5 KiB)

    NETSTAT -NR:

      Kernel IP routing table
      Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
      10.0.2.0        0.0.0.0         255.255.255.0    U        0 0          0 eth0
      0.0.0.0         10.0.2.2        0.0.0.0          UG       0 0          0

    P.S.: I'm still trying to work out the sudoers file so I can exec the iptables command.
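
    A sketch of the guest-side configuration (mine, not from the post), assuming the VM stays on VirtualBox NAT, where the network is 10.0.2.0/24, the gateway is 10.0.2.2 and the DNS proxy is 10.0.2.3:

      #!/bin/bash
      # Give the Fedora 12 guest a static address on VirtualBox's default NAT network.
      # Run inside the guest as root; adjust addresses if the NAT network differs.
      cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
      DEVICE=eth0
      BOOTPROTO=static
      ONBOOT=yes
      NM_CONTROLLED=no
      IPADDR=10.0.2.15
      NETMASK=255.255.255.0
      GATEWAY=10.0.2.2
      DNS1=10.0.2.3
      EOF

      service network restart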

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here). We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust so I have be instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency) and if the WAN connection fails each office still has access to their respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm adverse to that (well, any more adverse than I am to whole thing to begin with) but to my simple mind it just seems, well a bit overkill. I can definitely see the benefits of functional separation when we're talking larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller/s. I assume you can lose two domain controllers just as easily as one.

    Read the article

  • How do I correct a directory incorrectly copied into itself?

    - by Peter Boughton
    Given the following situation...

      <path>/mydir1/mydir2

    ...where mydir2 should have overwritten mydir1, but was instead placed inside, and both directories actually have the same filename. How is that fixed? Attempting to do

      mv <path>/mydir/mydir/* <path>/mydir/
      or
      mv <path>/mydir <path>/

    results in:

      mv: cannot move `<path>/mydir/mydir` to a subdirectory of itself, `<path>/mydir`

    This seems stupidly simple, but it's late here and I can't figure it out. There are seventeen such directories to fix (the path differs for each, but the mydir name is the same). To confirm, the error message can be caused with this:

      # cd /path/to/directory
      # mv mydir/mydir ./
      mv: cannot move `mydir/mydir' to a subdirectory of itself, `./mydir'

    Also tried:

      # mv mydir/mydir/* mydir/
      mv: cannot move `mydir/mydir/otherdir1' to a subdirectory of itself, `mydir/otherdir1'
      mv: cannot move `mydir/mydir/otherdir2' to a subdirectory of itself, `mydir/otherdir2'

    and...

      # mv /path/to/directory/mydir/mydir/otherdir1 /path/to/directory/mydir/
      mv: cannot move `/path/to/directory/mydir/mydir/otherdir1' to a subdirectory of itself, `/path/to/directory/mydir/otherdir1'

    and using a temporary directory:

      # mv mydir/mydir ./mydir-temp
      # mv mydir-temp/* mydir/
      mv: cannot move `mydir-temp/otherdir1' to a subdirectory of itself, `mydir/otherdir1'
      mv: cannot move `mydir-temp/otherdir2' to a subdirectory of itself, `mydir/otherdir2'

    I found a similar question, "How to recursively move all files (including hidden) in a subfolder into a parent folder in *nix?", which suggested that mv bar/{,.}* . would do this. But this also gives the same errors, as well as confusingly picking up . and .. from somewhere.

      # cd mydir
      # mv mydir/{,.}* .
      mv: cannot move `mydir/otherdir1' to a subdirectory of itself, `./otherdir1'
      mv: cannot move `mydir/otherdir2' to a subdirectory of itself, `./otherdir2'
      mv: cannot move `mydir/.' to `./.': Device or resource busy
      mv: cannot move `mydir/..' to `./..': Device or resource busy
      mv: overwrite `./.file'? y

    Another similar question, "linux mv command weirdness", suggests that mv doesn't overwrite and a copy is required.

      # cd mydir
      # cp -rf ./mydir/* ./
      cp: overwrite `./otherdir1/file1'? y
      cp: overwrite `./otherdir1/file2'? y
      cp: overwrite `./otherdir1/file3'?

    This appears to be working... except there are a lot of files (and dirs) - I don't want to confirm every one! Isn't the f there supposed to prevent this? Ok, so cp was aliased to cp -i (which I found out with type cp) and bypassed by using \cp -rf ./mydir/* ./ , which seems to have worked. Although I've solved the problem of getting dirs/files from one place to another, I'm still curious as to what's going on with the mv stuff - is this really a deliberate feature, as suggested by Warner?
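
    For completeness, a sketch of the rename-then-merge approach (mine, not from the question); the path is a placeholder and it assumes the inner copy is the one that should survive, as described in the scenario:

      #!/bin/bash
      # Fix "<path>/mydir/mydir": rename the outer directory first so the inner one
      # is no longer being moved into a subdirectory of itself.
      cd /path/to/directory        # placeholder path

      mv mydir mydir.outer         # old (outer) directory out of the way
      mv mydir.outer/mydir .       # new (inner) directory takes its place
      rm -rf mydir.outer           # discard the old contents the inner copy was meant to replace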

    Read the article

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, then the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nix as well) is smart enough not to delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, an application recently crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure what set of commands would need to be aliased. I imagine something like truss/strace, except that before each action it checks a filter to decide whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far:
    - Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option.
    - Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode.
    - "set noclobber" in ~/.bashrc so that I/O redirection won't overwrite files
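
    A sketch of the pattern those ideas circle around (mine, not from the question): never write into an existing library file in place; move it aside first so running processes keep their inode, and turn on noclobber so stray redirections fail loudly. Directory and file names are placeholders:

      #!/bin/bash
      set -o noclobber                  # make ">" refuse to truncate existing files (fails loudly)

      GRAVEYARD=/var/tmp/deleted-libs   # emptied later by cron; placeholder path

      # Replace a shared library without ever truncating the old inode.
      safe_install() {
          local src=$1 dest=$2
          if [ -e "$dest" ]; then
              mkdir -p "$GRAVEYARD"
              mv "$dest" "$GRAVEYARD/$(basename "$dest").$$"   # old inode lives on for processes that have it open
          fi
          cp "$src" "$dest"             # creates a brand-new inode
      }

      safe_install ./libfoo.so.1 /opt/app/lib/libfoo.so.1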

    Read the article

  • What are incentives (if any) to use WinRT instead of .Net?

    - by Ark-kun
    Let's compare WinRT with .Net .Net .Net is the 13+ years evolution of COM. Three main parts of .Net are execution environment, standard libraries and supported languages. CLR is the native-code execution environment based on COM .Net Framework has a big set of standard libraries (implemented using managed and native code) that can be used from all .Net languages. There are .Net classes that allow using OS APIs. WPF or Silverlight provide a XAML-based UI framework .Net can be used with C++, C#, Javascript, Python, Ruby, VB, LISP, Scheme and many other languages. C++/.Net is a variation of the C++ language that allows interaction with .Net objects. .Net supports inheritance, generics, operator and method overloading and many other features. .Net allows creating apps that run on Windows (XP, 7, 8 Pro (Desktop and Metro), RT, CE, etc), Mac OS, Linux (+ other *nix); iOS, Android, Windows Phone (7, 8); Internet Explorer, Chrome, Firefox; XBox 360, Playstation Suite; raw microprocessors. There is support for creating games (2D/3D) using any managed language or C++. Created by Developer Division WinRT WinRT is based on COM. Three main parts of WinRT are execution environment, standard libraries and supported languages. WinRT has a native-code execution environment based on COM WinRT has a set of standard libraries that more or less can be used from WinRT languages. There are WinRT classes that allow using OS APIs. Unnamed Silverlight clone provides a XAML-based UI framework WinRT can be used with C++, C#, Javascript, VB. C++/CX is a variation of the C++ language that allows interaction with WinRT objects. Custom WinRT components don't support inheritance (classes must be sealed), generics, operator overloading and many other features. WinRT allows creating apps that run on Windows 8 Pro and RT (Metro only); Windows Phone 8 (limited). There is support for creating games (2D/3D) using C++ only. Ordered by Windows Team I think that all the aspects except the last ones are very important for developers. On the other hand it seems that the most important aspect for Microsoft is the last one. So, given the above comparison of conceptually identical technologies, what are incentives (if any) to use WinRT instead of .Net?

    Read the article

< Previous Page | 9 10 11 12 13 14  | Next Page >