Search Results

Search found 26126 results on 1046 pages for 'generic service contract'.

Page 418/1046 | < Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >

  • Are you able to specify the profile you want to use with pfexec?

    - by jigjig
    When a user has been assigned multiple profiles, can you tell pfexec which one to use for a given command? One use for this would be executing a command as a different user within the same process. In exec_attr, you can specify the uid/gid that will be used to execute a particular command, as in the following example entry: Name Service Security:suser:cmd:::/usr/sbin/rpc.nsid:uid=0;gid=0 The above profile uses the superuser (uid=0) to execute the rpc.nsid command. In user_attr, you can assign multiple profiles, as below: testuser::::type=normal;profiles=Name Service Security,Object Access Management Can you then tell pfexec directly to use the Object Access Management profile?
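
    For reference, a quick way to see which profiles (and which exec_attr entries) actually apply to a user before invoking pfexec; the user name below is the one from the question and is otherwise a placeholder.

      # List the profiles assigned to testuser, then each profile's commands
      # together with their uid/gid/privilege attributes from exec_attr.
      profiles testuser
      profiles -l testuser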

    Read the article

  • cPanel IPTables custom rules

    - by James Haigh
    Hi, I'm trying to allow a host access to port 3306 by IP. I've added the rule and ran iptables-save as well as service iptables save. Both commands report "OK" with no errors, and the rule works absolutely fine. The server hasn't been restarted at all since I've been having this problem, but every day when I start developing on the server that needs MySQL access, the connection is refused. Back on the MySQL server, all I need to do is service iptables restart and everything then works as normal. The MySQL server is a CentOS cPanel VPS running on OpenVZ. Does anyone know how I can make these rules persist? Is it something cPanel is doing overnight that is messing with my config? Thanks.
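
    A minimal sketch of making such a rule survive on a CentOS cPanel box, assuming the nightly culprit is either ConfigServer Firewall (csf) or a cPanel job rewriting the rule set; the client IP below is a placeholder.

      # Hypothetical developer IP; adjust to your own.
      iptables -I INPUT -p tcp -s 203.0.113.10 --dport 3306 -j ACCEPT

      # Persist the running rule set and make sure it is restored at boot (CentOS 5/6 style).
      service iptables save
      chkconfig iptables on

      # If csf/lfd is installed it rebuilds the rules on its own schedule, so the
      # allow belongs in its config instead (advanced filter syntax):
      echo "tcp|in|d=3306|s=203.0.113.10" >> /etc/csf/csf.allow
      csf -r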

    Read the article

  • systemctl (Fedora 17) and interacting with spawned processes' consoles

    - by Sean
    Introduction: I've recently upgraded to Fedora 17 and I'm getting used to the newer systemctl daemon manager versus shell init scripts. A feature I need on some of my daemons is the ability to interact with their consoles, because unclean shutdowns not initiated by the process itself can cause database corruption. So performing a systemctl stop service-name.service, for example, might cause irreversible data loss. These consoles read user input through stdin or similar methods, so what I did on my old OS was run those daemons foregrounded in a screen session and suspend that session with ^A ^z. It's also worth noting that I've now made systemctl do this automatically if the computer reboots, but that still doesn't solve the potential data corruption problem I'm trying to avoid. My question: is there a way to use systemctl to directly interact with the console of processes it spawns? Can I hook a process through systemctl to get access to its console? Thanks! You guys always give great answers, so I'm turning to you.
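
    systemd does not attach a console you can type into, so a common workaround (sketched below with a hypothetical daemon name and admin command) is to let ExecStop run the daemon's own clean-shutdown command, i.e. the same thing you would have typed at its console, before systemd falls back to signals.

      # /etc/systemd/system/mydaemon.service  (then run: systemctl daemon-reload)
      # Assumes the daemon ships a CLI that triggers the same orderly shutdown its
      # console would; without an ExecStop, "systemctl stop" only sends signals.
      [Unit]
      Description=Example daemon with a console-driven clean shutdown

      [Service]
      ExecStart=/usr/local/bin/mydaemon --foreground
      ExecStop=/usr/local/bin/mydaemon-admin shutdown --wait
      TimeoutStopSec=300

      [Install]
      WantedBy=multi-user.target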

    Read the article

  • Share a folder with Active Directory group permissions

    - by Hihui
    I have a Debian server joined as a member of our AD (a Windows Server 2003 domain). I want to share two folders from it: one with full access for everyone, and a second readable only by the groups "ADM" and "PROD". Part of smb.conf: [global] workgroup = MYDOMAIN realm = MYDOMAIN.LOCAL netbios name = SERV-FTP wins server = "IP serv 2k3" security = domain [JUKEBOX] // full access path = /media/JUKEBOX/JUKEBOX comment = sharing writable = yes browsable = yes public = yes read only = no valid users = @ASYLUM\prod_std admin users = @ASYLUM\ADM [SOFTWARE] comment = Software path = /media/JUKEBOX/SOFTWARE valid users = @ASYLUM\prod_adv, @ASYLUM\ADM writable = yes read only = no My log shows: [2013/10/25 09:24:37.316643, 0] smbd/service.c:1055(make_connection_snum) canonicalize_connect_path failed for service SOFTWARE, path /media/JUKEBOX/SOFTWARE And from my Windows client, when I try to access that folder: Windows can't access \\serv-ftp\software Where is the problem? Thanks!
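
    canonicalize_connect_path errors usually mean Samba cannot resolve the share's path on disk (directory missing, wrong case, or the underlying mount absent), so a few hedged checks before touching the AD side:

      # Does the path exist exactly as written, and is the mount present?
      # On a case-sensitive filesystem SOFTWARE and Software are different names.
      ls -ld /media/JUKEBOX /media/JUKEBOX/SOFTWARE
      mount | grep /media/JUKEBOX

      # Validate the config and dump the effective [SOFTWARE] share settings.
      testparm -s --section-name=SOFTWARE

      # Watch the client's Samba log while reproducing the failure from Windows
      # (the exact log file name depends on the "log file" setting).
      tail -f /var/log/samba/log.*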

    Read the article

  • Does a receiving mail server (the ultimate destination) see a difference between emails delivered directly to it and emails delivered to an external relay which then forwards them to it?

    - by Matt
    Let's say my users have accounts on some mail server mail.example.com. I currently have my MX record set to mail.example.com and all is good. Now let's say I want to have mail initially delivered to an external service (e.g. Postini; note that this is not a Postini-specific question, though). In the normal situation where my MX points directly to my mail server mail.example.com, sending MTAs will of course look up my MX and deliver to mail.example.com. In my new situation I'd set my MX to mx.othermailservice.com and emails would be received there. OtherEmailService.com would then relay the emails (while keeping the Return-Path header the same) to mail.example.com. Do the emails that arrive at mail.example.com after being relayed by the other service "look" any different from emails that go directly to it, as would be the case when the MX was set to mail.example.com?

    Read the article

  • Unable to change IP address for eth0 without a restart in Ubuntu

    - by Rodnower
    I have Ubuntu 12.04.1 installed. I am trying to change the IP address of the interface eth0 in /etc/network/interfaces from 192.168.1.3 to 192.168.1.4: auto lo iface lo inet loopback pre-up iptables-restore < /etc/iptables.up.rules auto eth0 iface eth0 inet static address 192.168.1.4 gateway 192.168.1.1 netmask 255.255.255.0 network 192.168.1.0 broadcast 192.168.1.255 I check sudo service networking status, then issue: sudo service networking restart and get the response: stop: Unknown instance: networking stop/waiting And the IP remains 192.168.1.3: eth0 Link encap:Ethernet HWaddr 00:1e:33:71:cd:a4 inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:33ff:fe71:cda4/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3861 errors:0 dropped:0 overruns:0 frame:0 TX packets:3291 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3423285 (3.4 MB) TX bytes:521854 (521.8 KB) Interrupt:45 Base address:0x4000 The IP only changes after a reboot... Any ideas?
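
    On 12.04 the networking job often cannot be restarted this way under Upstart; a hedged alternative is to bounce just the one interface so the new address from /etc/network/interfaces is applied without rebooting:

      # Re-read /etc/network/interfaces for eth0 only.
      sudo ifdown eth0 && sudo ifup eth0

      # Or apply the change by hand with iproute2:
      sudo ip addr flush dev eth0
      sudo ip addr add 192.168.1.4/24 broadcast 192.168.1.255 dev eth0
      sudo ip route add default via 192.168.1.1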

    Read the article

  • CSF Unresolved issue

    - by josephmarhee
    I began receiving service failures for CSF/LFD once the iptables rule limit was reached, preventing the service from working properly. I flushed all iptables rules and redid my rules using CIDR ranges rather than the individual IPs that were listed, but the issue persists. Error: The VPS iptables rule limit (numiptent) is too low (1527/1536) - stopping firewall to prevent iptables blocking all connections, at line 1459 This is after restarting CSF, which gave me: You have an unresolved error when starting csf. You need to restart csf successfully to remove this warning CSF still seems to be trying to enforce rules that no longer exist (it lists entire chains when restarted, only to fail with that error). Any idea what's going on?
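
    On OpenVZ the numiptent barrier is imposed by the host node, so it helps to see how many entries the container is actually using and which csf settings generate them; a hedged sketch:

      # Usage vs. limit for iptables entries inside the container
      # (columns: held, maxheld, barrier, limit, failcnt).
      grep numiptent /proc/user_beancounters

      # Roughly how many rules csf is loading right now.
      iptables-save | wc -l

      # DENY_IP_LIMIT and DENY_TEMP_IP_LIMIT in csf.conf cap how many per-IP deny
      # rules csf/lfd keeps; lowering them (or asking the provider to raise
      # numiptent) is the usual remedy.
      grep -E '^DENY_(TEMP_)?IP_LIMIT' /etc/csf/csf.conf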

    Read the article

  • ADSL improvement in recent years

    - by cleong
    Currently I have a 2 Mbit/s ADSL connection. I signed up for the service more than five years ago. Has the technology improved much during that time to allow for greater speed over the same wires? The building I live in is quite old and the lines aren't very good; they weren't able to support 6 Mbit/s service back then. Now I notice that the lowest speed offered by my telco is 10 Mbit/s. Even that would be a serious improvement over what I have now. Here are the stats from the modem: Line Attenuation (Up/Down) [dB]: 10,5 / 15,5 SN Margin (Up/Down) [dB]: 31,5 / 29,0

    Read the article

  • CentOS 6 init script doesn't work properly

    - by user711643
    I'm setting up my Ruby production server based on CentOS 6. I need a process called god (a process monitoring tool) to start at boot. I'm using an init script that I found here. Just as stated in the guide I ran: chkconfig --add god and then chkconfig --level 345 god on After this, if I run service god start|restart everything works: it loads the available configurations and brings up the related processes (if they are not running). The problem is it doesn't work at boot. If I reboot the system and then run "ps -aux | grep god", god is running but apparently it didn't load the configuration files. If I then run service god restart, it loads everything without problems. What am I doing wrong?
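
    A common cause is that the boot-time invocation starts god without the configuration the interactive service call picks up (relative paths, or a Ruby environment such as RVM that isn't loaded at boot); a few hedged things to check, with example paths:

      # Confirm the runlevel symlinks chkconfig created.
      chkconfig --list god

      # See exactly how the init script launches god; if it does not pass an
      # absolute "-c /path/to/config.god", boot-time god starts with no watches.
      grep -n 'god' /etc/init.d/god

      # Example of an explicit invocation the script could use (paths are examples):
      # god -c /etc/god/master.god -l /var/log/god.log -P /var/run/god.pid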

    Read the article

  • Will a 5 terabyte NAS drive be compatible with Windows XP SP3 32-bit?

    - by TrevorBoydSmith
    (NOTE: The operating system (in this case Windows XP SP3 32-bit) we are using is not a choice.) I am trying to set up a short-term storage device. First, I found a large 5 terabyte NAS drive that would, IMO, fulfill my storage requirements. Second, I also found that Windows XP seems to have a hard drive size limit (see 'Is there a limit to the size of a hard drive for Windows XP pre-SP1?'): XP should handle up to 2 TB per volume after the service packs are applied. You are correct. There was a 137 GB limit on the original pre-service-pack Windows XP. This was addressed/fixed in SP1. My question is, will my Windows XP SP3 32-bit machine see the 5 terabyte NAS and be able to read/write properly to the NAS drive?

    Read the article

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me. The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage they did was that in just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what. Though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now, 21 gigs of files are gone, and of course I don't know which ones. I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backup locally would also mean 2 backup drives or some kind of RAID per PC, right, because you can't trust a single point of failure? Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up I can restore?

    Read the article

  • Partner Webcast - Oracle WebCenter: Portal Highlights - 31 Oct 2013

    - by Roxana Babiciu
    Oracle WebCenter is the center of engagement for business. In order to succeed in today's economy, organizations need to engage with information across all channels to ensure customers, partners and employees have access to the right information in the context of the business process in which they are engaged. The latest release of Oracle WebCenter addresses this challenge with updates across its complete portfolio. Nowadays, portals are multi-channel applications that enable the creation, sharing and distribution of personalized content, as well as access to social networking and self-service capabilities. Web 2.0 and social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. The new release of Oracle WebCenter Portal makes it easier and faster for business users to create intuitive portals with integrated application content: it streamlines development with an integrated set of tools for web and mobile, provides out-of-the-box templates for common use cases, and expedites the portal creation experience with new development tools that empower business users to build and deploy mobile portals and websites with unprecedented speed, without having to wait for IT, which leads to a shorter time to market and reduced costs. Join us to discover a web platform that allows organizations to quickly and easily create intranets, extranets, composite applications, and self-service portals, providing users a more secure and efficient way of consuming information and interacting with applications, processes, and other users – the latest Oracle WebCenter Portal release 11gR1 PS7. Read more here

    Read the article

  • Minimum rights to access the whole Users directory on another computer

    - by philipthegreat
    What are the minimum rights required to access the Users directory on another computer via an admin share? I have a batch file that writes some information to a few other computers using a path of \\%COMPUTERNAME%\c$\Users\%USERNAME%\AppData\Roaming. The batch files run under an unprivileged user (part of Domain Users only). How do I set appropriate rights so that the service account can access the AppData\Roaming folder for every user on another computer? I'd like to grant rights lower than Local Admin, which I know will work. Things I've attempted: As Domain Admin, tried to give Modify rights to the C:\Users\ directory on the local computer (Error: Access Denied). Set the service account as Local Admin on the other computer; this works, but is against IT policy where I work. I'd like to accomplish this with rights lower than Local Admin. Any suggestions?

    Read the article

  • NAT with iptables: reconnecting fails within about 3 minutes

    - by xpu
    I set up NAT with iptables as follows: echo 1 > /proc/sys/net/ipv4/ip_forward iptables -t nat -I PREROUTING -p tcp --dport 9000 -j DNAT --to xx.xx.xx.xx iptables -t nat -I POSTROUTING -p tcp --dport 9000 -j MASQUERADE service iptables save service iptables restart The configuration worked fine, but there is a problem when I disconnect and try to reconnect: the connection is refused for about 2-3 minutes, and after that things go fine again. What is the problem? How do I make it accept a new connection right after the old one breaks?
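
    One thing worth checking (a hedged guess, since the symptom matches a stale connection-tracking entry lingering after the old session) is the conntrack table, and narrowing the MASQUERADE rule to traffic that is actually being forwarded; xx.xx.xx.xx stays the placeholder from the question:

      # Inspect, and if necessary delete, tracked connections for port 9000;
      # a leftover entry from the previous session can block a new one until it
      # times out a few minutes later. (Requires the conntrack-tools package.)
      conntrack -L -p tcp --dport 9000
      conntrack -D -p tcp --dport 9000

      # Slightly tighter rule pair: only masquerade traffic headed to the DNAT target.
      iptables -t nat -I PREROUTING  -p tcp --dport 9000 -j DNAT --to-destination xx.xx.xx.xx
      iptables -t nat -I POSTROUTING -p tcp -d xx.xx.xx.xx --dport 9000 -j MASQUERADE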

    Read the article

  • Change AccountName/LoginName for a SharePoint User (SPUser)

    - by Rohit Gupta
    Consider the following: we have an account named MYDOMAIN\eholz, whose Active Directory login name changes to MYDOMAIN\eburrell. This user was an active user on a SharePoint 2010 team site and had a user profile under the account name MYDOMAIN\eholz. Since the AD login name changed to eburrell, we need to update the SharePoint user (the SPUser object) as well as the user profile to reflect the new account name. To update the SharePoint user login name we can run the following stsadm command on the server: STSADM –o migrateuser –oldlogin MYDOMAIN\eholz –newlogin MYDOMAIN\eburrell –ignoresidhistory However, to update the SharePoint 2010 user profile, I first tried running an Incremental/Full Synchronization using the User Profile Synchronization service; this did not work. To be able to update the AccountName field (which is a read-only field) of the user profile, I had to first delete the user profile for MYDOMAIN\eholz and then run a Full Synchronization using the User Profile Synchronization service, which synchronizes the SharePoint user profiles with the AD profiles. Update: if you just run the STSADM –o migrateuser command, the profile also gets updated automatically, so all you need is the stsadm –o migrateuser command and you don't need to delete and recreate the user profile.

    Read the article

  • Allowing client to select data to return via REST interface

    - by CMP
    I have a REST service that is essentially a proxy to a variety of other services. So if I call GET /users/{id} it will get the user's profile, as well as order history, contact info, etc., all from various services, and aggregate them into one nice object. My problem is that each call to a different service has the potential to add time to the original request, so we would rather not fetch ALL the data ALL of the time if a particular client does not care about all of the pieces. A solution I have arrived at is to do something like this: GET /users/{id}?includeOrders=true&includeX=true&includeY=true... That works, and it allows me to do only what I need to, but it is cumbersome. We have added enough different data sources that there are too many parameters for that style to be useful. I could do something similar with a single integer and a bitmask, but that only makes it harder to read, and it does not feel very RESTful. I could break it down into multiple calls, so clients would need to call /users/{id}/orders and /users/{id}/profile separately, but that sort of defeats the purpose of an aggregating proxy, whose purpose is to make clients' jobs easier. Are there any good patterns that can help me return just enough data for each client, without making it too difficult for them to filter and select what they want?
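
    One common middle ground (in the spirit of sparse fieldsets in JSON:API or the field selectors of the Facebook Graph API) is a single fields parameter whose comma-separated tokens name the sub-resources to expand; a hedged sketch against a hypothetical host, using the endpoints from the question:

      # Default call returns only the cheap, always-available core profile.
      curl 'https://api.example.com/users/42'

      # The client opts in to the expensive aggregations it actually needs;
      # the proxy maps each token to the backend call it must fan out to.
      curl 'https://api.example.com/users/42?fields=profile,orders,contacts'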

    Read the article

  • Redirecting email from domain registrar to hosting company mailbox

    - by jmoreno
    I have the domain example.com registered with company A. I have the hosting with company B (ServerGrove). Company A offers me simple parking, and company B offers me a mailbox service. What I would like is to use the hosting company's mail service. How do I configure the DNS records at company A so that mail is delivered to company B's mailboxes? I think I have to add an MX record to company A's configuration, and then the same in company B's DNS records; is this correct? I think I'm mixing concepts and cannot see a clear solution; I've tried several configurations but all failed. Any help would be appreciated. Regards.
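
    Only the zone that is authoritative for example.com (company A, in this setup) needs the MX record, and it should point at the mail host company B tells you to use; nothing in company B's own DNS has to change as long as its server accepts mail for the domain. A hedged sketch, with a made-up ServerGrove host name:

      # Zone data at company A (the mail host name below is hypothetical;
      # use the one ServerGrove gives you):
      #   example.com.   IN  MX  10  mail.servergrove-host.example.

      # Verify what the outside world sees once the change has propagated:
      dig +short MX example.com
      dig +short A mail.servergrove-host.example.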

    Read the article

  • How to build an API on top of an existing Rails app with NodeJs and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. They recently merged with a company from another country as a strategy to grow the business and the profit. The other company has almost the same stats, a similar strategy and its own mobile apps. We have just decided that we need to merge the data from both companies and to start a new app from scratch, since the RoR app is too old and heavily patched and the app from the other company was built with a custom PHP framework without any documentation. The only good news is that both databases are in MySQL and have a similar structure. The challenge: I need to build a new version that can handle a lot of traffic, preserves the SEO strategies of both companies, serves two different domains, and has a strong API that can support the legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and NodeJS with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think that I need to create a service that can handle a lot of I/O requests, since the app is heavily used to create orders for restaurants at a certain time of the day. The questions: With all this in mind, what type of architecture do you recommend I follow? What gems or Node packages do you think will work best? How do I build a new Rails app and keep using the same database structure? Should I use NodeJS to build an API or just build a new service with Ruby? I know that I'm asking too much from you guys, but please help me by answering any topic that you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated! Thanks!

    Read the article

  • Can a Camel Jetty proxy URL share the web application context?

    - by user1750353
    I am struggling with Camel Jetty proxy routes. At times, the routes exhibit inconsistent behavior. My proxy app is deployed with context root "Proxy"; however, if I give that as the context path for my proxy URL I get a service-not-found error. If I change "Proxy" to an arbitrary context such as "Dummy" then the route works. Is that how the camel-jetty component works? From path: jetty:http://0.0.0.0:6080/Proxy/PurchaseOrder/?matchOnUriPrefix=true&disableStreamCache=true&traceEnabled=true To path: jetty:http://localhost:7001/Provider/PurchaseOrder/?bridgeEndpoint=true&throwExceptionOnFailure=false Another issue I noticed with Jetty: if I deploy both Proxy and Provider on the same app container (same listening port), then the route completely stops working, saying "Provider/PurchaseOrder/" service not found. The only way the routes work is if both routes run on different ports and the from route doesn't share the web context path. I have a requirement that, if needed, I should be able to run both Proxy and Provider on the same container. Any help appreciated. Thanks,

    Read the article

  • Searching Objects on SonicWALL (NSA 2600, SonicOS 6.1)

    - by Justin Scott
    Ok, this may sound like a dumb question, but does the SonicOS web interface not have a search option for object definitions? One of my clients recently decided to replace their Astaro Security Gateways with SonicWALL firewalls. These sit in front of a small data center full of servers and we have several hundred custom service and address definitions that need to be ported. The SonicOS interface provides a basic list for service and address definitions but no search option to be found. To make it worse, there is no option to list them all on one page (they're paginated 50 at a time) so I can't use the text search on the web browser either. The Astaro units have a nice search option on their definitions so perhaps I just got spoiled by their software. Am I missing something or is there some way to search for an object without paging through the list and finding an entry manually?

    Read the article

  • Securing a Cloud-Based Data Center

    - by Orgad Kimchi
    No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access to data and applications. In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment. As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users. Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here. When we build a cloud, the following aspects related to the security of the data and applications in the cloud become a concern:
    • Sensitive data must be protected from unauthorized access while residing on storage devices, during transmission between servers and clients, and when it is used by applications.
    • When a project is completed, all copies of sensitive data must be securely deleted and the original data must be kept permanently secure.
    • Communications between users and the cloud must be protected to prevent exposure of sensitive information from "man-in-the-middle" attacks.
    • Limiting the operating system's exposure protects against malicious attacks and penetration by unauthorized users or automated "bots" and "rootkits" designed to gain privileged access.
    • Strong authentication and authorization procedures further protect the operating system from tampering.
    • Denial-of-service attacks, whether they are started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored.
    In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system. Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. With Oracle Solaris 11, the security of any cloud is ensured. This article explains how.

    Read the article

  • Remote connection issue with SQL Server 2005 from SSMS and services, but not IIS

    - by Mallioch
    Here is the situation: I have a Server 2008 box that is trying to connect to a SQL Server 2005 instance. Connections from websites running in the context of IIS to the SQL Server machine work fine using SQL Server authentication. Rockin'. However, using the same connection string, I cannot get a Windows service on the same box to communicate with the SQL Server. Nor can I get Management Studio to connect from the same box. IIS great, other options not so much. For grins I have tried monkeying with the user accounts in the IIS app pools to match that of the service to get the sites to break, and that hasn't worked, so it doesn't appear to be a user account issue. Since this is happening with two different programs and not with IIS, I'm assuming there is something shut down on the SQL Server that needs to allow non-IIS clients to communicate, but I have no idea what that would be. Any help would be appreciated.
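
    To separate a SQL-side restriction from something client-side, it can help to test the same TCP path and the same SQL login from that box but outside IIS; a hedged sketch from cmd.exe, with placeholder host and credentials:

      REM Is the SQL listener reachable at all? (default instance on TCP 1433;
      REM requires the Telnet Client feature to be installed)
      telnet sqlhost.example.local 1433

      REM Force the same protocol and the same SQL login the connection string uses,
      REM so named pipes / shared memory differences are ruled out.
      sqlcmd -S tcp:sqlhost.example.local,1433 -U webappuser -P secret -Q "SELECT @@SERVERNAME"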

    Read the article

  • why is this happening?-"dhcpcd will not work correctly unless run as root"

    - by user330317
    I have installed Arch Linux and GNOME in VirtualBox. I had no problem connecting to the internet, but now, after installing GNOME and rebooting, there is no internet connection. I have tried following the instructions from the Arch wiki, but I can't figure out the problem. Please help. host-63drhd% sudo netctl status enp0s3 netctl@enp0s3.service - Networking for netctl profile enp0s3 Loaded: loaded (/usr/lib/systemd/system/netctl@.service; static) Active: inactive (dead) Docs: man:netctl.profile(5) host-63drhd% sudo netctl enable enp0s3 Profile 'enp0s3' does not exist or is not readable host-63drhd% sudo dhcpcd dhcpcd[1486]: sending commands to master dhcpcd process host-63drhd% dhcpcd dhcpcd[1543]: control_open: Permission denied dhcpcd[1543]: dhcpcd will not work correctly unless run as root dhcpcd[1543]: open `/run/dhcpcd.pid': Permission denied dhcpcd[1543]: control_start: Permission denied dhcpcd[1543]: version 6.3.2 starting dhcpcd[1543]: enp0s3: if_init: Permission denied dhcpcd[1543]: enp0s8: if_init: Permission denied dhcpcd[1543]: no valid interfaces found dhcpcd[1543]: no interfaces have a carrier dhcpcd[1543]: forked to background, child pid 1544
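
    The two messages are separate problems: dhcpcd refuses to do anything useful as a non-root user, and netctl is complaining because no profile file named enp0s3 exists under /etc/netctl. A hedged sketch of creating one from the example shipped with netctl (run as root), plus a dhcpcd-only alternative:

      # Create a DHCP profile for enp0s3 from netctl's shipped example, then start
      # and enable it so it comes up on every boot.
      cp /etc/netctl/examples/ethernet-dhcp /etc/netctl/enp0s3
      sed -i 's/^Interface=.*/Interface=enp0s3/' /etc/netctl/enp0s3
      netctl start enp0s3
      netctl enable enp0s3

      # Or skip netctl and let dhcpcd manage the interface directly:
      systemctl enable dhcpcd@enp0s3.service
      systemctl start dhcpcd@enp0s3.service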

    Read the article

  • Cannot start Apache 2.2.22 on Fedora 15

    - by Roderik
    I am trying to start Apache 2.2.22 under Fedora 15 on my local machine. After fixing some errors related to missing modules, httpd -t just gives me 'Syntax OK'. However, when I try to start Apache as the root user with service httpd start, it still returns: Starting httpd (via systemctl): Job failed. See system logs and 'systemctl status' for details. [FAILED] When running systemctl I don't see any extra information other than: httpd.service loaded failed failed LSB: start and stop Apache HTTP Server So I wonder where to look now to get this back up and running.
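
    The real failure reason is rarely in the "Job failed" line itself; a few hedged places to look (log paths depend on whether this is the Fedora httpd package or a self-built 2.2.22):

      # Unit status and the system log around the failed start.
      systemctl status httpd.service
      tail -n 50 /var/log/messages
      tail -n 50 /var/log/httpd/error_log      # or /usr/local/apache2/logs/error_log

      # A config that passes "httpd -t" can still fail at runtime, e.g. when another
      # process already owns port 80 or SELinux blocks a non-default port or DocumentRoot.
      netstat -tlnp | grep ':80 '
      ausearch -m avc -ts recent | tail        # requires the audit package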

    Read the article

  • Hosting solution for sensitive client data

    - by Mark
    Hello, we are developing a web application that will deal with highly sensitive (financial) data belonging to clients (the audience is medium to large sized businesses). Clients will be under scrutiny from regulators and auditors and, as such, we will be too. More importantly, to give clients a level of comfort, our application and the related hosting arrangement should instill a lot of confidence in them. We are looking into using a cloud-based service like Linode, Amazon EC2, etc. To allow for maximum flexibility, we are keen on putting everything on virtual servers and avoiding having to buy our own hardware. Does a cloud-based service make sense for our particular scenario? If not, what type of hosting should we consider? If so, what should we look out for? Thanks!

    Read the article
