Search Results

Search found 5786 results on 232 pages for 'umbraco scripts'.

Page 177/232 | < Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows for DML driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including trace, log, checkpoint before, suspend, abort, and several others. With 11gR1 events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:

      - Automatic switchover to the secondary system during planned outages
      - Better monitoring over source systems' performance and automated switchover to the standby system in case of an outage with the primary system
      - Automatic switchover from initial load to changed data movement
      - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
      - Automatic stoppage of the Delivery module to allow end-of-day reporting
      - Finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers

    If you would like to see a demo, please visit our youtube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions to the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • determine if udp socket can be accessed via external client

    - by JohnMerlino
    I don't have access to the company firewall server, but supposedly port 1720 is open on my one Ubuntu server. So I want to test it with netcat:

        sudo nc -ul 1720

    The port is listening on the machine ITSELF:

        sudo netstat -tulpn | grep nc
        udp   0   0 0.0.0.0:1720   0.0.0.0:*   29477/nc

    The port is open and in use on the machine ITSELF:

        lsof -i -n -P | grep 1720
        gateway 980 myuser 8u IPv4 187284576 0t0 UDP *:1720

    Checked the firewall on the current server:

        sudo ufw allow 1720/udp
        Skipping adding existing rule
        Skipping adding existing rule (v6)

        sudo ufw status verbose | grep 1720
        1720/udp    ALLOW IN    Anywhere
        1720/udp    ALLOW IN    Anywhere (v6)

    But I try echoing data to it from another computer (I replaced the x's with the real integers):

        echo "Some data to send" | nc xx.xxx.xx.xxx 1720

    But it didn't write anything. So then I try with telnet from the other computer as well:

        telnet xx.xxx.xx.xxx 1720
        Trying xx.xxx.xx.xxx...
        telnet: connect to address xx.xxx.xx.xxx: Operation timed out
        telnet: Unable to connect to remote host

    Although I don't think telnet works with UDP sockets. I ran nmap from another computer within the same local network and this is what I got:

        sudo nmap -v -A -sU -p 1720 xx.xxx.xx.xx
        Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-31 15:41 EDT
        NSE: Loaded 36 scripts for scanning.
        Initiating Ping Scan at 15:41
        Scanning xx.xxx.xx.xx [4 ports]
        Completed Ping Scan at 15:41, 0.10s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 15:41
        Completed Parallel DNS resolution of 1 host. at 15:41, 0.00s elapsed
        Initiating UDP Scan at 15:41
        Scanning xtremek.com (xx.xxx.xx.xx) [1 port]
        Completed UDP Scan at 15:41, 0.07s elapsed (1 total ports)
        Initiating Service scan at 15:41
        Initiating OS detection (try #1) against xtremek.com (xx.xxx.xx.xx)
        Retrying OS detection (try #2) against xtremek.com (xx.xxx.xx.xx)
        Initiating Traceroute at 15:41
        Completed Traceroute at 15:41, 0.01s elapsed
        NSE: Script scanning xx.xxx.xx.xx.
        NSE: Script Scanning completed.
        Nmap scan report for xtremek.com (xx.xxx.xx.xx)
        Host is up (0.00013s latency).
        PORT     STATE  SERVICE VERSION
        1720/udp closed unknown
        Too many fingerprints match this host to give specific OS details
        Network Distance: 1 hop
        TRACEROUTE (using port 1720/udp)
        HOP RTT     ADDRESS
        1   0.13 ms xtremek.com (xx.xxx.xx.xx)
        Read data files from: /usr/share/nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds
        Raw packets sent: 27 (2128B) | Rcvd: 24 (2248B)

    The only thing I can think of is a firewall or VPN issue. Is there anything else I can check for before requesting that they look at the firewall server again?
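
    One detail jumps out from the tests above: the echo command was run without nc's -u flag, so it attempted a TCP connection to a UDP listener. A minimal UDP-specific round trip looks like the sketch below; the address is a placeholder, and the listener shown is the OpenBSD netcat that ships with Ubuntu.

        # on the Ubuntu server: listen on UDP 1720; anything received is printed,
        # and anything typed here is sent back to the most recent sender
        sudo nc -u -l 1720

        # on the external client: -u selects UDP, -w3 gives up after 3 seconds
        echo "probe" | nc -u -w3 xx.xxx.xx.xxx 1720

    If the probe shows up on the server but nothing typed back reaches the client, the datagram is getting through one way only, which still points at the firewall or NAT path rather than the host itself.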

    Read the article

  • How can I be prepared to join a company?

    - by Aerovistae
    There's more to it than that, but this title was the best way I could think of to sum it up.

    I'm a senior in a good computer science program, and I'm graduating early. About to start interviews and all whatnot. I'm not a super-experienced programmer, not one of those people who started in middle school. I'm decent at this, but I'm not among the best, not nearly. I have to do an awful lot of googling.

    So today I'm meeting some fellow for lunch at a campus cafe to discuss some front-end details when this tall, good-looking guy begs pardon, says he's new to campus, says he's wondering if we know where he can go to sign up for recruiting developers. Quickly evolves into long conversation: he's the CEO of a seems-to-be-doing-well start-up. Hiring passionate interns and full-times. Sounds great!

    I take one look at his site on my own computer later, immediately spot a major bug. No idea how to fix it, but I see it. I go over to the page code, and good god. It's the standard amount of code you would expect from a full-scale web application, a couple dozen pages of HTML and scripts. I don't even know where to start reading it. I've built sites from scratch, but obviously never on that scale, nor have I ever worked on one of that scale. I have no idea which bit might generate the bug.

    But that sets me thinking: How could someone like me possibly settle into an environment like that? A start-up is a very high-pressure working environment. I don't know if I can work at that pace under those constraints -- I would hate to let people down. And with only 10 employees, it's not like anyone has much time to help you get your bearings.

    Somewhere in there is a question. Can you see it? I'm asking for general advice here. Maybe even anecdotal advice. Is joining a start-up right out of college a scary process? Am I overestimating what it would take to figure out the mass of code behind this site? What's the likelihood a decent but only moderately-experienced coder could earn his pay at such a place? For instance, I know nothing of server-side/back-end programming. Never touched it. That scares me.

    Read the article

  • Running a program on boot without login, using the screen

    - by configurator
    Preface: I have a server running on an old laptop. The screen is always on with a login prompt, but because its keyboard is in pretty bad shape, I use it exclusively via ssh. The screen is in a good position, though; I want to use it to display a clock and some stats about what my server is doing. I have scripts to display all those things, but I want to always show them on the monitor screen.

    My question is, how do I get my script (called HUD) to run on /dev/tty1 instead of the login prompt? Hopefully, it should be possible to accept keyboard input as well as display its output, so that it can use the keyboard to show more info where needed in a future version. I'd also like tty2 etc. to remain active as login screens, in case I actually do need to log in locally.

    For a start, I tried creating a script that I can run from ssh to start the HUD. It goes something like this:

        ( flock -n 9 watch --interval 0.2 --precise --color --notitle --exec /path/to/script & disown ) 9> /var/lock/hud > /dev/tty1 2> /dev/tty1 < /dev/tty1

    (I had to use & disown instead of nohup because nohup recognized the tty and redirected output to nohup.out instead.)

    This sort-of works. However, it has a few issues:

    - It doesn't steal the terminal's keyboard input, so you can't ctrl+c to get out of it (nor change the script to actually use the keyboard input), and if you press enter it shows it and scrolls the display, never refreshing it correctly afterwards.
    - Oddly, if I disconnect the ssh session which created it, it stops working and shows a message: exec: No such file or directory. If I reconnect via ssh, it resumes functioning properly.
    - It feels hackish.

    Is there a better way to do this? How?
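
    A possible alternative to the redirection trick, sketched under the assumption that the kbd package's openvt utility is available: openvt starts a command on a chosen virtual console and makes that console its controlling terminal, so the HUD gets the keyboard as well as the display. The script path and console number below are placeholders.

        # run as root (e.g. from the ssh session): start the HUD on /dev/tty1,
        # forcing the console (-f) and switching the display to it (-s)
        sudo openvt -f -c 1 -s -- /path/to/hud-script

    The getty that normally owns tty1 may need to be stopped or moved to another console so the two don't fight over the terminal; tty2 and up stay untouched as login screens.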

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging in to my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in programming regex, that should be core\.\d+).

    I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says:

        This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values.

    Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs but are sandwiched in between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns: Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much; it's only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha but every now and then some get through. My vanity email has a spam filter but it isn't as good as I'd like.)

    Next, if this is a feature, can I turn it off? Is it advisable to? I've only 150MB, so you see why I'm fretting over a 50MB spike.

    Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the Wordpress-managed subdomain of my site. I manage my server through CPanel. Lastly, I decided to delete these files and after some checking nothing seems amiss in my websites or my mail. They are indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected.
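
    A side note, offered as an assumption rather than something stated in the post: files named core.<pid> are normally core dumps written by a crashing process, and the mail-folder strings suggest a mail-related binary is the one crashing. Two quick checks, with a placeholder filename:

        # ask what produced the dump; the output usually names the executable
        file core.12345

        # stop future dumps for processes started from this shell (shared hosts may
        # also allow a hard limit of 0 in /etc/security/limits.conf)
        ulimit -c 0

    Knowing which program is dumping is also the thing worth telling the hosting provider, since repeated crashes of a mail daemon are their problem to fix.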

    Read the article

  • Are there new flexible text editors? [closed]

    - by RParadox
    Vi(m) and Emacs are almost 40 years old. Why are they still the standard, and what attempts have been made at coming up with a new flexible editor? Are there any features which cannot be built into vim/emacs?

    My question is similar to this one: Time to drop Emacs and vi? No one had a suggestion, which surprises me. I would have thought that the core of a text editor is very small and that people brew their own. Perhaps nobody wants to deal with supporting all the modes?

    Edit to clarify my question: Instead of writing modes and scripts, I ask myself why there isn't a much more lightweight project which lets people customize the editor more directly. Vim has 365395 LOC (C lines all included), Emacs 1.5 million LOC. Why isn't there a project of, say, 50k LOC forming the core, which people could build on more freely? Perhaps there is such a project; I haven't looked very far. I thought about putting together modules from Vim myself and experimenting with some ideas. The core of editing shouldn't be more than 10k LOC. Vim has a lot of optimizations which are really overkill nowadays. Basically I'm looking for a code base for my own editor, and Vi/Emacs are, I believe, not intended to be used in this way.

    Bill Joy said the following about vi (http://web.cecs.pdx.edu/~kirkenda/joy84.html):

        The fundamental problem with vi is that it doesn't have a mouse and therefore you've got all these commands. In some sense, its backwards from the kind of thing you'd get from a mouse-oriented thing. I think multiple levels of undo would be wonderful, too. But fundamentally, vi is still ed inside. You can't really fool it. Its like one of those pinatas - things that have candy inside but has layer after layer of paper mache on top. It doesn't really have a unified concept. I think if I were going to go back - I wouldn't go back, but start over again.

    Read the article

  • How to set up an rsync backup to Ubuntu securely?

    - by ws_e_c421
    I have been following various tutorials and blog posts on setting up an Ubuntu machine as a backup "server" (I'll call it a server, but it's just running Ubuntu desktop) that I push new files to with rsync. Right now, I am able to connect to the server from my laptop using rsync and ssh with an RSA key that I created, with no password prompt, when my laptop is connected to my home router that the server is also connected to. I would like to be able to send files from my laptop when I am away from home. Some of the tutorials I have looked at had brief suggestions about security, but they didn't focus on them. What do I need to do to let my laptop send files to the server without making it too easy for someone else to hack into the server?

    Here is what I have done so far:

    - Ran ssh-keygen and ssh-copy-id to create a key pair for my laptop and server.
    - Created a script on the server to write its public IP address to a file, encrypt the file, and upload it to an ftp server I have access to (I know I could sign up for a free dynamic DNS account for this part, but since I have the ftp account and don't really need to make the IP publicly accessible, I thought this might be better).

    Here are the things I have seen suggested:

    - Port forwarding: I know I need to assign the server a fixed IP address on the router and then tell the router to forward a port or ports to it. Should I just use port 22 or choose a random port and use that?
    - Turn on the firewall (ufw). Will this do anything, or will my router already block everything except the port I want?
    - Run fail2ban.

    Are all of those things worth doing? Should I do anything else? Could I set up the server to allow connections with the RSA key only (and not with a password), or will fail2ban provide enough protection against malicious connection attempts? Is it possible to limit the kinds of connections the server allows (e.g. only ssh)?

    I hope this isn't too many questions. I am pretty new to Ubuntu (but use the shell and bash scripts on OSX). I don't need to have the absolute most secure setup. I'd like something that is reasonably secure without being so complicated that it could easily break in a way that would be hard for me to fix.
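
    A minimal sketch of the usual hardening steps, assuming OpenSSH and ufw on the Ubuntu box; the username is a placeholder and the port is whichever one the router forwards:

        # /etc/ssh/sshd_config: key-only logins, no root, only the backup account
        #   PasswordAuthentication no
        #   PermitRootLogin no
        #   AllowUsers backupuser

        # allow just the forwarded SSH port, then enable the firewall
        sudo ufw allow 22/tcp
        sudo ufw enable

        # apply the sshd changes
        sudo service ssh restart

    With password authentication off, fail2ban becomes a nice-to-have rather than a necessity, since password guessing can no longer succeed.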

    Read the article

  • WLST (WebLogic Scripting Tool)

    - by Masa Sasaki
    (Originally posted in Japanese; the text is heavily garbled in this copy, so the recoverable content is summarized in English.) The post recaps the 38th WebLogic Server workshop, held on July 23, whose topic was WLST, the WebLogic Scripting Tool. WLST is a command-line scripting environment for administering WebLogic Server, included since WebLogic Server 9.x and built on Jython, a Java implementation of Python. It works through Java Management Extensions (JMX): the server's configuration and runtime state are exposed as managed beans (MBeans), which WLST scripts navigate and modify. WLST can be used interactively, by running .py script files, or embedded in Java programs, and it can operate both online (connected to a running server) and offline. The session's tips focused on locating the MBean you need, using the WLST ls and find commands, JRockit Mission Control, config.xml, and the WebLogic Server MBean Reference, along with monitoring examples such as ThreadPoolRuntimeMBean and JMS statistics. Sample WLST scripts ship with the product under $WL_HOME/samples/server/examples/src/examples/wlst/online and $WL_HOME/common/templates/scripts/wlst.

    Read the article

  • Networking stopped working on Ubuntu

    - by 1337Rooster
    I installed Ubuntu 10.04 through the Wubi installer (funny, I installed it today and thought I would have gotten 10.10). I had a network connection and everything was working fine. I rebooted my computer a couple of times and then suddenly I could not connect to the network, and when I click the wireless/networking icon it says "Networking Disabled". I reinstalled Ubuntu and the problem went away. After a few reboots the problem returned. I have tried restarting to see if it would come back, as well as a few other things listed below. Any other suggestions would be appreciated.

    Tried to restart networking via /etc/init.d/networking:

        amato@ubuntu:~$ sudo /etc/init.d/networking restart
         * Reconfiguring network interfaces...
        Ignoring unknown interface eth0=eth0.
        [ OK ]

    Tried to stop and start it:

        amato@ubuntu:~$ sudo /etc/init.d/networking stop
         * Deconfiguring network interfaces... [ OK ]
        amato@ubuntu:~$ sudo /etc/init.d/networking start
        Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service networking start
        Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the start(8) utility, e.g. start networking
        networking stop/waiting

    Tried start networking:

        amato@ubuntu:~$ start networking
        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.58" (uid=1000 pid=2241 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
        amato@ubuntu:~$ sudo start networking
        networking stop/waiting

    Tried service networking restart:

        amato@ubuntu:~$ service networking restart
        restart: Rejected send message, 1 matched rules; type="method_call", sender=":1.60" (uid=1000 pid=2248 comm="restart) interface="com.ubuntu.Upstart0_6.Job" member="Restart" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
        amato@ubuntu:~$ sudo service networking restart
        restart: Unknown instance:

    Here are the contents of my /etc/network/interfaces:

        auto lo
        iface lo inet loopback

    I even tried to modify it to this (based on something I read online; not sure if I was doing the right thing here), tried everything again, and had no luck:

        auto lo eth0
        iface lo inet loopback
        iface eth0 inet dhcp
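
    One lead worth checking, offered as an assumption rather than a confirmed fix: the "Networking Disabled" label comes from NetworkManager, and on 10.04 its on/off state is persisted in /var/lib/NetworkManager/NetworkManager.state, which can get stuck at false after an unclean shutdown. A sketch of flipping it back:

        sudo service network-manager stop
        # reset the persisted flag
        sudo sed -i 's/NetworkingEnabled=false/NetworkingEnabled=true/' /var/lib/NetworkManager/NetworkManager.state
        sudo service network-manager start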

    Read the article

  • AWS Cloud Formation.Requires capabilities : [CAPABILITY_IAM] (Child Stack)

    - by Drew Khoury
    I'm running a CloudFormation template in the AWS Console.

    Running the stack directly: I started with a template that used IAM resources, and the console prompts me to acknowledge IAM capabilities when running the stack directly.

    Running the stack as a child: I then tried to call the same stack from a parent stack and did not receive the same prompt. The stack then failed with the message:

        Requires capabilities : [CAPABILITY_IAM]

    Research: The docs indicate that I can run CF scripts in a number of ways. There's plenty of documentation around the CLI/API and supplying the capability parameter, but there appears to be no information about how to make sure it's applied when running through the console. See "IAM Resources in AWS CloudFormation Templates" (covering the CF console, CLI, and API): http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html

    What I've done / what I think: I've raised an issue via the forum for now, but no response (yet): https://forums.aws.amazon.com/thread.jspa?threadID=139160. I suspect this is a bug in the console, as there doesn't appear to be any documentation of how to change the behaviour via the console, and as far as I'm aware this should just work. Has anyone come across the same problem, or can anyone report that it's working fine for them?
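
    For comparison, this is how the acknowledgement is supplied outside the console; the stack and template names below are placeholders. When the parent stack is created with the capability, that acknowledgement is what CloudFormation checks as it creates the nested stacks.

        # create the parent stack and explicitly acknowledge IAM resource creation
        aws cloudformation create-stack \
          --stack-name parent-stack \
          --template-body file://parent-template.json \
          --capabilities CAPABILITY_IAM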

    Read the article

  • Getting client denied when accessing a wsgi graphite script

    - by Dr BDO Adams
    I'm trying to set up graphite on my Mac OS X 10.7 Lion machine. I've set up Apache to call the Python graphite script via WSGI, but when I try to access it, I get a forbidden from Apache, and in the error log:

        client denied by server configuration: /opt/graphite/webapp/graphite.wsgi

    I've checked that the script's location is allowed in httpd.conf, and the permissions of the file, but they seem correct. What do I have to do to get access? Below is the httpd.conf, which is nearly the graphite example.

        <IfModule !wsgi_module.c>
            LoadModule wsgi_module modules/mod_wsgi.so
        </IfModule>
        WSGISocketPrefix /usr/local/apache/run/wigs
        <VirtualHost _default_:*>
            ServerName graphite
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            # XXX You will need to create this file! There is a graphite.wsgi.example
            # file in this directory that you can safely use, just copy it to graphite.wgsi
            WSGIScriptAlias / /opt/graphite/webapp/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            Alias /media/ "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache.
            <Directory "/opt/graphite/webapp/">
                Options +ExecCGI
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    Can you help?

    Read the article

  • How can I set IIS Application Pool recycle times without resorting to the ugly syntax of Add-WebConfiguration?

    - by ObligatoryMoniker
    I have been scripting the configuration of our IIS 7.5 instance and, through bits and pieces of other people's scripts, I have come up with a syntax that I like:

        $WebAppPoolUserName = "domain\user"
        $WebAppPoolPassword = "password"
        $WebAppPoolNames = @("Test","Test2")

        ForEach ($WebAppPoolName in $WebAppPoolNames ) {
            $WebAppPool = New-WebAppPool -Name $WebAppPoolName
            $WebAppPool.processModel.identityType = "SpecificUser"
            $WebAppPool.processModel.username = $WebAppPoolUserName
            $WebAppPool.processModel.password = $WebAppPoolPassword
            $WebAppPool.managedPipelineMode = "Classic"
            $WebAppPool.managedRuntimeVersion = "v4.0"
            $WebAppPool | set-item
        }

    I have seen this done a number of different ways that are less terse, and I like the way this syntax of setting object properties looks compared to something like what I see on TechNet:

        Set-ItemProperty 'IIS:\AppPools\DemoPool' -Name recycling.periodicRestart.requests -Value 100000

    One thing I haven't been able to figure out, though, is how to set up recycle schedules using this syntax. This command sets ApplicationPoolDefaults but is ugly:

        add-webconfiguration system.applicationHost/applicationPools/applicationPoolDefaults/recycling/periodicRestart/schedule -value (New-TimeSpan -h 1 -m 30)

    I have done this in the past through appcmd using something like the following, but I would really like to do all of this through PowerShell:

        %appcmd% set apppool "BusinessUserApps" /+recycling.periodicRestart.schedule.[value='01:00:00']

    I have tried:

        $WebAppPool.recycling.periodicRestart.schedule = (New-TimeSpan -h 1 -m 30)

    This has the odd effect of turning the .schedule property into a timespan until I use $WebAppPool = get-item iis:\AppPools\AppPoolName to refresh the variable. There is also $WebAppPool.recycling.periodicRestart.schedule.Collection, but there is no add() function on the collection and I haven't found any other way to modify it.

    Does anyone know of a way I can set scheduled recycle times using syntax consistent with the code I have written above?

    Read the article

  • Yum update not working on CentOS 6.2 minimal install

    - by Owen
    Note: This is my first question on the Stack Exchange network, so please have mercy and provide guidance where needed.

    I have installed a CentOS 6.2 KVM guest and I am having problems getting yum to work. This is my first time working with CentOS, so I feel that it's a setting somewhere that I am missing but cannot find using Google. Here are my steps.

    Downloaded CentOS-6.2-x86_64-minimal.iso, booted, and went through the default steps (the only questions asked were keyboard, timezone, root password and use entire hdd). Restarted, logged in, pinged google.com to no avail. Set the following settings:

        vi /etc/resolv.conf
        nameserver 8.8.8.8
        nameserver 8.8.4.4

        vi /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE="eth0"
        HWADDR="52:54:00:42:1B:4A"
        #NM_CONTROLLED="yes"
        BOOTPROTO=none
        ONBOOT="yes"
        NETMASK=255.255.255.0
        IPADDR=192.168.122.151
        TYPE=Ethernet

        vi /etc/sysconfig/network
        NETWORKING=yes
        NETWORKING_IPV6=no
        HOSTNAME=server3.example.com
        GATEWAY=192.168.122.1

    I can now ping google.com:

        ping google.com
        PING google.com (173.194.70.139) 56(84) bytes of data.
        64 bytes from fa-in-f139.1e100.net (173.194.70.139): icmp_seq=1 ttl=50 time=5.88 ms
        64 bytes from fa-in-f139.1e100.net (173.194.70.139): icmp_seq=2 ttl=50 time=5.77 ms

    But I cannot 'yum update':

        yum update
        Loaded plugins: fastestmirror, presto
        Loading mirror speeds from cached hostfile
        Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os error was
        14: PYCURL ERROR 7 - "Failed to connect to 2a01:c0:2:4:216:3eff:fe0d:266d: Network is unreachable"
        Error: Cannot find a valid baseurl for repo: base

    My KVM guest is also NAT'd, in case that's of concern.
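
    One thing worth trying, sketched under the assumption that this yum build honours the ip_resolve option: the mirrorlist fetch is failing against an IPv6 address, and a NAT'd guest typically has IPv4 connectivity only, so yum can be pinned to IPv4.

        # as root: force yum's downloader to resolve and connect over IPv4 only
        echo "ip_resolve=4" >> /etc/yum.conf
        yum clean all
        yum update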

    Read the article

  • WDS's MDT DeploymentShare and REMINST replicated with DFS-R does not read WIM from local WDS

    - by mbrownnyc
    I've read several guides on using DFS-R with WDS and MDT to replicate REMINST and the DeploymentShare, and I have a particularly strange problem. On the receiving server, after configuring WDS and mounting the DeploymentShare into MDT's DeploymentWorkbench, I also performed the following:

    1) In .\Control\Bootstrap.ini, changed DeployRoot to \\%wdsserver%\DeploymentShare$
    2) Changed the UNC path at the root of the MDT Deployment Share in the DeploymentWorkbench to match that of the current server.
    3) In Unattend.xml files located in .\Control\*, modified the following value to match the current server: <cpi:offlineImage catelog://HOST/

    I am able to boot and grab the LiteTouch PE image off the local WDS TFTP server, but the WIM files, the scripts, everything else is being pulled off the WDS server at the remote site (the original WDS server that was the source of the files within the DFS-R replicated folder). What do I do in order to solve this problem? I've grepped all the files below the DeploymentShare to look for instances of the hostname of the WDS server at the remote site (the source of the files), but I found none.

    Here are the guides I referred to:
    http://technet.microsoft.com/en-us/library/cc771324%28WS.10%29.aspx
    http://blogs.technet.com/b/askds/archive/2009/12/16/wds-and-dfsr-love-at-first-sync.aspx
    http://oasysadmin.com/2011/11/03/copying-moving-and-replicating-the-mdt-2010-deployment-share/

    Read the article

  • powershell v2 remoting - How do you enable unencrypted traffic

    - by Peter Walke
    I'm writing a PowerShell v2 script that I'd like to run against a remote server. When I run it, I get the error:

        Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Unencrypted traffic is currently disabled in the client configuration. Change the client configuration and try the request again. For more information, see the about_Remote_Troubleshooting Help topic.

    I looked at the online help for about_Remote_Troubleshooting, but it didn't point me towards how to enable unencrypted traffic. Below is the script that I'm using that is causing me problems. Note: I have already run Enable-PSRemoting on the remote machine to allow it to accept incoming requests. I have tried to use a session option variable, but it doesn't seem to make any difference.

        $key = "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds"
        Set-ItemProperty $key ConsolePrompting True
        $tvar = "password"
        $password = ConvertTo-SecureString -string $tvar -asPlainText -force
        $username="domain\username"
        $mySessionOption = New-PSSessionOption -NoEncryption
        $credential = New-Object System.Management.Automation.PSCredential($username,$password)
        invoke-command -filepath C:\scripts\RemoteScript.ps1 -sessionoption $mySessionOption -authentication digest -credential $credential -computername RemoteServer

    How do I enable unencrypted traffic?

    Read the article

  • Why does my PowerShell script hang when called in PSEXEC via a batch (.cmd) file?

    - by Kev
    I'm trying to remotely execute a PowerShell script using PsExec. The PowerShell script is called via a .cmd batch file. The reason we do this is to change the execution policy, run the PowerShell script, then reset the execution policy again. On the remote server, do-tasks.cmd looks like:

        powershell -command "&{ set-executionpolicy unrestricted}"
        powershell DoTasks.ps1
        powershell -command "&{ set-executionpolicy restricted}"

    The PowerShell script DoTasks.ps1 just does this for now:

        Write-Output "Hello World!"

    Both of these scripts live in c:\windows\system32 (for now) just so they're on the PATH. On the originating server I do this:

        psexec \\web1928 -u administrator -p "adminpassword" do-tasks.cmd

    When this runs I get the following response at the command line:

        c:\Windows\system32>powershell -command "&{ set-executionpolicy unrestricted}"

    and the script runs no further. I can't Ctrl-C to break the script, I just see ^C characters; I can type input from the keyboard and the characters are echoed to the console. On the remote server I see that PowerShell.exe and CMD.exe are running in Task Manager's Processes tab. If I end these processes then control returns to the command line on the originating server.

    I have tried this with just a simple .cmd batch file with an @echo hello world and it works just fine. Running do-tasks.cmd on the remote server via an RDP session works OK as well. Why is my remote batch file getting stuck when executing via PsExec?

    Read the article

  • Apache2, FastCGI, PHP-FPM, APC on virtualmin panel with nginx front end reverse proxy

    - by Ünsal Korkmaz
    My dream setup: PHP 5.3.6 + MySQL 5.5.10 on Apache2, FastCGI, PHP-FPM and APC, with nginx 1.0 as a front-end reverse proxy, and Virtualmin GPL as a free server management panel, on a fresh CentOS 5.6 install.

    I used this to install Virtualmin:

        wget http://software.virtualmin.com/gpl/scripts/install.sh
        chmod +x install.sh
        ./install.sh

    After setup, I see PHP is 5.1 and MySQL is 5.0. The system doesn't support php-fpm, but it does support the fcgid wrapper. I made the following changes:

        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1.0-6.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh ius-release*.rpm epel-release*.rpm
        yum install yum-plugin-replace
        yum remove mysql.i386
        yum replace mysql --replace-with mysql55
        service mysqld restart
        chkconfig mysqld on
        mysql_upgrade --password=1234
        yum replace php --replace-with php53u
        yum install php53u-fpm php53u-pecl-apc
        service httpd restart
        chkconfig php-fpm on
        service php-fpm start

    I'm not sure why Virtualmin installs both the i386 and 64-bit versions of MySQL together, but I needed to remove one of them to use yum replace. So now I have PHP 5.3.6 + MySQL 5.5.10 with PHP-FPM and APC installed. But Virtualmin doesn't support PHP-FPM + FastCGI, and it's still running on fcgid. I am an ultra newbie at server management, so I couldn't find a workaround after this. I want to switch from the fcgid wrapper to PHP-FPM + FastCGI for at least one virtual server. And if I can find a fix for this part, I want to set up nginx 1.0 as a front-end reverse proxy for serving static files and passing PHP files to Apache. http://nginxcp.com/ is what I want, but it's for cPanel.

    Read the article

  • How do I upgrade django on ubuntu 9.04?

    - by Lorin Hochstein
    I've got Django 1.0.2 installed on Ubuntu 9.04. I'd like to upgrade Django, because I have an app that needs Django 1.1 or greater. I tried using pip to do the upgrade, but got the following:

        $ sudo pip install Django==1.1
        Downloading/unpacking Django==1.1
          Downloading Django-1.1.tar.gz (5.6Mb): 5.6Mb downloaded
          Running setup.py egg_info for package Django
        Installing collected packages: Django
          Found existing installation: Django 1.0.2-final
            Not uninstalling Django at /var/lib/python-support/python2.6, outside environment /usr
          Running setup.py install for Django
            changing mode of build/scripts-2.6/django-admin.py from 644 to 755
            changing mode of /usr/local/bin/django-admin.py to 755
        Successfully installed Django

    It seems like it worked, but it refuses to remove the original Django 1.0.2, and sure enough:

        $ pip freeze | grep -i django
        Django==1.0.2-final
        django-debug-toolbar==0.8.3
        django-sphinx==2.2.3
        $ /usr/local/bin/django-admin.py --version
        1.0.2 final

    The problem, apparently, is that pip won't uninstall files outside of /usr. I'd like to remove the existing Django files manually, but I have no idea how to do that, because I'm unfamiliar with how Python packages are laid out in Ubuntu. It looks pretty complicated. The site-packages directory is:

        $ python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
        /usr/lib/python2.6/dist-packages

    However, that's not where the django files live:

        $ ls -ld /usr/lib/python2.6/dist-packages/[Dd]jango*
        ls: cannot access /usr/lib/python2.6/dist-packages/[Dd]jango*: No such file or directory

    There's a /var/lib/python-support/python2.6/django directory, and the __init__.py file in that directory points to /usr/share/python-support/python-django/django/__init__.py. Clearly, pip is able to figure out where the files live. Is there any way to retrieve the list of files associated with the django package so I can just delete them manually?
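
    One possible route, assuming the 1.0.2 install came from Ubuntu's python-django package (which is what lays files out under /var/lib/python-support and /usr/share/python-support): let the package manager remove it cleanly, then let pip install the newer release.

        # remove the distro-packaged Django, which owns the python-support files
        sudo apt-get remove python-django

        # install the desired version with pip and confirm what the interpreter sees
        sudo pip install Django==1.1
        python -c "import django; print django.get_version()"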

    Read the article

  • error reading keytab file krb5.keytab

    - by Banjer
    I've noticed these Kerberos keytab error messages on both SLES 11.2 and CentOS 6.3:

        sshd[31442]: pam_krb5[31442]: error reading keytab 'FILE:/etc/krb5.keytab'

    /etc/krb5.keytab does not exist on our hosts, and from what I understand of the keytab file, we don't need it. Per this Kerberos keytab introduction:

        A keytab is a file containing pairs of Kerberos principals and encrypted keys (these are derived from the Kerberos password). You can use this file to log into Kerberos without being prompted for a password. The most common personal use of keytab files is to allow scripts to authenticate to Kerberos without human interaction, or store a password in a plaintext file.

    This sounds like something we do not need, and it is perhaps better security-wise not to have it. How can I keep this error from popping up in our system logs? Here is my krb5.conf if it's useful:

        banjer@myhost:~> cat /etc/krb5.conf
        # This file managed by Puppet
        #
        [libdefaults]
                default_tkt_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
                default_tgs_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
                preferred_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
                default_realm = FOO.EXAMPLE.COM
                dns_lookup_kdc = true
                clockskew = 300
        [logging]
                default = SYSLOG:NOTICE:DAEMON
                kdc = FILE:/var/log/kdc.log
                kadmind = FILE:/var/log/kadmind.log
        [appdefaults]
        pam = {
                ticket_lifetime = 1d
                renew_lifetime = 1d
                forwardable = true
                proxiable = false
                retain_after_close = false
                minimum_uid = 0
                debug = false
                banner = "Enter your current"
        }

    Let me know if you need to see any other configs. Thanks.

    EDIT: This message shows up in /var/log/secure whenever a non-root user logs in via SSH or the console. It seems to only occur with password-based authentication. If I do a key-based ssh to a server, I don't see the error. If I log in with root, I do not see the error. Our Linux servers authenticate against Active Directory, so it's a hearty mix of PAM, Samba, Kerberos, and winbind that is used to authenticate a user.
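
    One way to quiet the message, sketched under the assumption that the AD join was done with Samba's net ads and that giving pam_krb5 a real keytab is acceptable: populate /etc/krb5.keytab from the machine account that already exists.

        # as root: write the host's AD machine-account keys to /etc/krb5.keytab
        net ads keytab create -U Administrator

        # verify the principals that were written
        klist -k /etc/krb5.keytab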

    Read the article

  • Mac OS X 10.6 Setup for Apache/MySQL/Perl

    - by Russell C.
    I just got a new Mac and have been trying to setup a local development environment for my perl applications for a few days now with no luck. I'm getting no where fast so I hope someone else who has done this successfully could help. I started by installing MAMP which I thought would take care of everything for me but unfortunately it doesn't take care of some important perl modules. I used CPAN to install all our required modules except that it seems DBD::mysql doesn't install correctly through CPAN. After reading a lot online, lots of people reported problems with this and recommended using MacPorts to install the module which I have tried doing with no luck using the following command: sudo port install p5-dbd-mysql After what seems like a successful install of DBD::mysql, Apache continues to report the following error when trying to run any of our Perl scripts: [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level /Library/Perl/Updates/5.10.0 /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0 /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0 /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0 /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level /System/Library/Perl/Extras/5.10.0 .) at (eval 1835) line 3. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Perhaps the DBD::mysql perl module hasn't been fully installed, [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] or perhaps the capitalisation of 'mysql' isn't right. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge. I'm not sure where to go from here but my Mac isn't much of a development environment if Perl isn't able to talk to the database. I'd really appreciate any help and advice you might be able to provide in getting my system setup successfully. Thanks in advance!

    Read the article

  • Do we need to explicitly pass php.ini's location to php-fpm?

    - by F21
    I am seeing a strange issue where my php.ini is not used if I do not explicitly pass it to php-fpm when starting it. This is the upstart script I am using:

        start on (filesystem and net-device-up IFACE=lo)
        stop on runlevel [016]

        pre-start script
            mkdir -p /run/php
        end script

        expect fork
        respawn

        exec /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf

    If PHP is started with the above, my php.ini is never used, even though it is in the Configuration File (php.ini) Path. This is the relevant part from phpinfo():

        Configuration File (php.ini) Path          /etc/php/
        Loaded Configuration File                  (none)
        Scan this dir for additional .ini files    (none)
        Additional .ini files parsed               (none)

    If I modify the last line of the upstart script to point php-fpm to php.ini explicitly:

        exec /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf -c /etc/php/php.ini

    then we see that the php.ini is loaded:

        Configuration File (php.ini) Path          /etc/php/
        Loaded Configuration File                  /etc/php/php.ini
        Scan this dir for additional .ini files    (none)
        Additional .ini files parsed               (none)

    Why is this the case? Is this a quirk in php-fpm?

    Minor update: This also seems to be a problem for php5-fpm installed using apt-get. I did a test install in an Ubuntu Server 12.04 virtual machine by running the following:

        sudo apt-get install nginx php5-fpm

    PHP-FPM and nginx were started after installation and everything seemed fine. I then uncommented php's settings in nginx's configuration and placed a test phpinfo() file to inspect PHP's settings. The relevant bits are:

        Configuration File (php.ini) Path          /etc/php5/fpm
        Loaded Configuration File                  (none)
        Scan this dir for additional .ini files    /etc/php5/fpm/conf.d
        Additional .ini files parsed               /etc/php5/fpm/conf.d/10-pdo.ini

    I noted that no php.ini was loaded either. However, if I go to /etc/php5/fpm, I can see that a php.ini exists. I also checked the startup scripts for PHP-FPM and the -c parameter was not used to link the ini file to PHP. This can potentially be confusing for people who would expect php.ini to be loaded automatically by PHP-FPM.
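
    A quick check that narrows this down, assuming this php-fpm build supports the usual -i and -t switches: running the binary by hand, outside upstart, shows which ini it picks up on its own, separating a compiled-in search-path problem from a service-environment one.

        # print the ini the binary loads with the same flags as the upstart job (minus -c)
        /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf -i | grep -i "configuration file"

        # sanity-check the FPM configuration while at it
        /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf -t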

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, etc., using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)?

    I'm working my way through my first Salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

        /etc/salt/master:
        publish_port: 465
        ret_port: 443

        /etc/salt/minion:
        master_port: 465

    That did not work.

    Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (and more planned). Every few months I have to update the application. (Often this just amounts to replacing a .jar file, which requires root permissions.) I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using Team Viewer to log into each client. I would like to dramatically improve this process. The two options I'm aware of are either:

    - use reverse ssh tunnels and bash scripts. I tested this and it works, but I don't get any of the reporting, etc., I would get with Salt Stack.
    - use Salt Stack (or a similar) management tool. But I need a really simple tool; I can't invest any time in a big learning curve.

    I looked at Puppet and a bunch of related tools. The only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
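
    For context, and as a sketch rather than a verified recipe: Salt's transport is minion-initiated, so minions behind consumer NAT normally need no inbound ports at all; only the master must be reachable on its default ZeroMQ ports, 4505 (publish) and 4506 (returns). The hostname below is a placeholder.

        # on the master: open the default Salt ports (ufw shown; adjust for iptables)
        sudo ufw allow 4505:4506/tcp

        # on each minion, /etc/salt/minion only needs a master address the laptop can reach:
        #   master: salt.example.com

        # restart the minion, then accept its key on the master
        sudo service salt-minion restart
        sudo salt-key -A

    Overriding publish_port/ret_port to 465/443 only matters if outbound traffic from the laptops is filtered; plain consumer NAT usually allows outbound connections on any port, including 4505/4506.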

    Read the article

  • Disable local delivery in Sendmail

    - by Luke P M
    I am using Sendmail on a CentOS server to send email for PHP scripts, but the problem is that mail is delivered to a local mailbox on the machine rather than to what is specified in the MX records for the domain, which actually point to another machine I use for email. I would like sendmail not to try to deliver mail locally for the domain the machine is set up for. Is there a simple way to disable local delivery? The domain is not in the local-host-names file.

    I've already done lots of googling and I have looked at:
    http://serverfault.com/questions/26934/sendmail-configuration-to-not-deliver-mail-to-local-machine
    http://serverfault.com/questions/65365/disable-local-delivery-in-sendmail
    But either there is no answer or it is not suitable. I don't want to relay to another server, I just want it to send mail regardless of domain.

    To provide an example: I have two servers, one is the mail server at mail.example.com and the other is a web server at example.com. When I use the SMTP service on the web server, it currently routes mail to a local mailbox on example.com, but it should be going to mailboxes on mail.example.com.

    Output of sendmail -bt returns:

        ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
        Enter <ruleset> <address>
        > 3,0 [email protected]
        canonify           input: info @ example . com
        Canonify2          input: info
        Canonify2        returns: info
        canonify         returns: info
        parse              input: info
        Parse0             input: info
        Parse0           returns: info
        ParseLocal         input: info
        ParseLocal       returns: info
        Parse1             input: info
        Parse1           returns: $# local $: info
        parse            returns: $# local $: info
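
    Two diagnostics that usually pinpoint this, using the example.com names from above as placeholders: sendmail's verify mode shows which mailer and host an address is routed to, and class w lists every name sendmail treats as local; example.com showing up there would explain local delivery despite the MX records.

        # show how sendmail would actually deliver to the address (mailer, host, user)
        sendmail -bv info@example.com

        # list the hostnames sendmail considers local (class w)
        echo '$=w' | sendmail -bt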

    Read the article

  • How to execute msdb.dbo.sp_start_job from a stored procedure in user database in sql server 2005

    - by Ram
    Hi everyone, I am trying to execute msdb.dbo.sp_start_job from MyDB.dbo.MyStoredProc in order to execute MyJob.

    1) I know that if I give the user the SQLAgentUser role he will be able to run the jobs that he owns (BUT THIS IS WHAT I OBSERVED: THE USER WAS ABLE TO START/STOP/RESTART THE SQL AGENT, SO I DO NOT WANT TO GO THIS ROUTE). Let me know if I am wrong, but I do not understand why such an under-privileged user would be able to start/stop agents.

    2) I know that if I give the executing user execute permissions on msdb.dbo.sp_start_job and enable ownership chaining or enable TRUSTWORTHY on the user database it would work (BUT I DO NOT WANT TO ENABLE OWNERSHIP CHAINING NOR TRUSTWORTHY ON THE USER DATABASE).

    3) I think this can be done by code signing.

    In the user database:
    i) create a stored proc MyDB.dbo.MyStoredProc
    ii) create a certificate job_exec
    iii) sign MyDB.dbo.MyStoredProc with certificate job_exec
    iv) export the certificate

    In msdb:
    i) import the certificate
    ii) create a derived user from this certificate
    iii) grant AUTHENTICATE for this derived user
    iv) grant execute on msdb.dbo.sp_start_job to the derived user
    v) grant execute on msdb.dbo.sp_start_job to the user executing MyDB.dbo.MyStoredProc

    But I tried it and it did not work for me. I don't know which piece I am missing or doing wrong, so please provide me with a simple example (with scripts) for executing msdb.dbo.sp_start_job from the user stored proc MyDB.dbo.MyStoredProc using code signing.

    Many, many thanks in advance.
    Ram

    Read the article

  • How do I host node.js apps with pm2 without running them as root?

    - by jishi
    I have set up pm2 to run a node.js application, and I can successfully start it and it will resurrect upon reboot. However, the pm2 daemon is run as root, which makes me think that all my node scripts also run as root? Even though I added them as a regular user in the system.

    The log files and such are created in the user's home dir, ~/.pm2/logs, but the logs are owned by root. When I invoke pm2 startup (which handles the installation of the init.d script etc.), it creates /etc/init.d/pm2-init.sh, which looks like this:

        #!/bin/bash
        # chkconfig: 2345 98 02
        #
        # description: PM2 next gen process manager for Node.js
        # processname: pm2
        #
        ### BEGIN INIT INFO
        # Provides:          pm2
        # Required-Start:
        # Required-Stop:
        # Should-Start:
        # Should-Stop:
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: PM2 init script
        # Description: PM2 is the next gen process manager for Node.js
        ### END INIT INFO

        NAME=pm2
        PM2=/usr/local/lib/node_modules/pm2/bin/pm2
        NODE=/usr/local/bin/node
        export HOME="/root"

        start() {
            echo "Starting $NAME"
            $NODE $PM2 stopAll
            $NODE $PM2 resurrect
        }

        stop() {
            $NODE $PM2 dump
            $NODE $PM2 stopAll
        }

        restart() {
            echo "Restarting $NAME"
            stop
            start
        }

        status() {
            echo "Status for $NAME:"
            $NODE $PM2 list
            RETVAL=$?
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            status)
                status
                ;;
            restart)
                restart
                ;;
            *)
                echo "Usage: {start|stop|status|restart}"
                exit 1
                ;;
        esac
        exit $RETVAL

    When I dump the processes (which is what it will use when resurrecting the processes), I see mentions of user "USER":"pi", but I don't think it's actually run as user pi. Any thoughts?
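
    A hedged suggestion: the generated script exports HOME=/root and resurrects root's dump file, which is why everything ends up owned by root; pm2 startup can generate the script for a specific user instead. Flag support varies between pm2 releases, so check pm2 startup -h before relying on the exact form below (the username is the 'pi' seen in the dump).

        # regenerate the init script so the pm2 daemon runs as the unprivileged user
        sudo pm2 startup ubuntu -u pi

        # then, as that user, save the process list so resurrect has something to restore
        pm2 dump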

    Read the article
