Search Results

Search found 22,633 results on 906 pages for 'service accounts' (page 383).

  • ADSL improvement in recent years

    - by cleong
    Currently I have a 2 Mb/s ADSL connection. I signed up for the service more than five years ago. Has the technology improved much during that time, to allow greater speed over the same wires? The building I live in is quite old and the lines aren't very good; they weren't able to support 6 Mb/s service back then. Now I notice that the lowest speed offered by my telco is 10 Mb/s. Even that would be a serious improvement over what I have now. Here are the stats from the modem:

        Line Attenuation (Up/Down) [dB]: 10,5 / 15,5
        SN Margin (Up/Down) [dB]: 31,5 / 29,0

  • Unable to change IP address for eth0 without a restart in Ubuntu

    - by Rodnower
    I have Ubuntu 12.04.1 installed. I am trying to change the IP address of the interface eth0 in /etc/network/interfaces from 192.168.1.3 to 192.168.1.4:

        auto lo
        iface lo inet loopback
        pre-up iptables-restore < /etc/iptables.up.rules

        auto eth0
        iface eth0 inet static
        address 192.168.1.4
        gateway 192.168.1.1
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255

    I check the service with sudo service networking status, then issue:

        sudo service networking restart

    The response is:

        stop: Unknown instance:
        networking stop/waiting

    And the IP remains 192.168.1.3:

        eth0    Link encap:Ethernet  HWaddr 00:1e:33:71:cd:a4
                inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
                inet6 addr: fe80::21e:33ff:fe71:cda4/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:3861 errors:0 dropped:0 overruns:0 frame:0
                TX packets:3291 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:3423285 (3.4 MB)  TX bytes:521854 (521.8 KB)
                Interrupt:45 Base address:0x4000

    The IP only changes after a reboot. Any ideas?
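
    A minimal workaround sketch, under the assumption (consistent with the symptom above) that on Ubuntu 12.04 "networking" is an Upstart task rather than a restartable service, so the usual fix is to bounce the interface itself:

        # Re-read /etc/network/interfaces and apply the new address to eth0
        sudo ifdown eth0 && sudo ifup eth0

        # Or change the address directly, without a file edit, using iproute2
        sudo ip addr del 192.168.1.3/24 dev eth0
        sudo ip addr add 192.168.1.4/24 dev eth0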

  • Managing client passwords [closed]

    - by C.Johns
    I am just starting up a small website development business, and one of the issues I am having is remembering passwords and account information for clients' hosting, cPanel, FTP accounts, etc. I was wondering: what is the most suitable system / industry standard for controlling such information? Pretty marginal on the close there... I read the FAQ and I felt like this could be a common issue for webmasters. It's definitely not a coding question, so Stack Overflow is out of the question, and it's not a broad question; it's focused on one particular aspect of being a webmaster.
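
    One commonly cited approach, sketched here with the Unix pass utility (the tool choice and paths below are illustrative, not an industry mandate): each credential lives GPG-encrypted in its own file, so the store can be versioned and backed up safely.

        # One-time setup: initialize the store against your GPG key
        pass init "you@example.com"

        # File one entry per client service
        pass insert clients/acme/cpanel
        pass insert clients/acme/ftp

        # Generate a strong 20-character password for a new hosting account
        pass generate clients/acme/hosting 20

        # Retrieve a credential later
        pass show clients/acme/cpanel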

  • AdWords traffic not (properly) reflected in Analytics

    - by CJM
    I have an AdWords account, which was set to use auto-tagging of URLs. When looking at the Analytics account for that site, I couldn't find any reference to AdWords traffic in either the Advertising section or the Traffic Sources section. So I manually constructed the URL tags and updated the campaign ad. Once the ad was approved and the clicks started coming through again, I could see the results in the Traffic Sources section of Analytics: in Sources > Campaigns my campaign was listed, and under Sources > All Traffic it was registering the same level of traffic from google/adwords. However, the Advertising > AdWords section is still drawing a blank. Any ideas? Are there explicit steps needed to enable full tracking of AdWords campaigns? If it is relevant, the AdWords campaign was set up with one account and the Analytics tracking with another, but both accounts have full access to both AdWords and Analytics.
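
    For reference, a manually tagged destination URL generally takes this shape (the domain and campaign values below are placeholders):

        http://www.example.com/landing?utm_source=google&utm_medium=cpc&utm_campaign=spring_sale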

  • Change AccountName/LoginName for a SharePoint User (SPUser)

    - by Rohit Gupta
    Consider the following: we have an account named MYDOMAIN\eholz. This account's Active Directory login name changes to MYDOMAIN\eburrell. This user was an active user on a SharePoint 2010 team site and had a user profile under the account name MYDOMAIN\eholz. Since the AD login name changed to eburrell, we need to update the SharePoint user (the SPUser object) as well as the user profile to reflect the new account name. To update the SharePoint user's login name, we can run the following stsadm command on the server:

        STSADM -o migrateuser -oldlogin MYDOMAIN\eholz -newlogin MYDOMAIN\eburrell -ignoresidhistory

    However, to update the SharePoint 2010 user profile, I first tried running an incremental/full synchronization using the User Profile Synchronization service... this did not work. To update the AccountName field (which is a read-only field) of the user profile, I had to first delete the user profile for MYDOMAIN\eholz and then run a FULL synchronization using the User Profile Synchronization service, which synchronizes the SharePoint user profiles with the AD profiles. Update: if you just run the STSADM -o migrateuser command, the profile also gets updated automatically, so all you need is to run stsadm -o migrateuser; you don't need to delete and recreate the user profile.
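
    For completeness, a sketch of the PowerShell equivalent on SharePoint 2010 (the site URL is a placeholder; run it in the SharePoint 2010 Management Shell):

        # Look up the old account on a site where the user exists
        $user = Get-SPUser -Web "http://teamsite" -Identity "MYDOMAIN\eholz"

        # Migrate it to the new login, ignoring SID history as with stsadm
        Move-SPUser -Identity $user -NewAlias "MYDOMAIN\eburrell" -IgnoreSID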

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle, with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me.

    The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage was that on just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what: though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now 21 gigs of files are gone, and of course I don't know which ones.

    I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth cap. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backing up locally would also mean two backup drives or some kind of RAID per PC, right? Because you can't trust a single point of failure. Assuming I move to Dropbox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up, I can restore?
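
    One concrete piece of an answer, sketched under stated assumptions (the paths, mount point, and retention scheme below are hypothetical): keep a rotating local snapshot of the synced folder on one machine, so a bad sync or deletion can't silently take the only copy with it.

        #!/bin/sh
        # Nightly snapshot of the cloud-synced folder using rsync hard links.
        SRC="$HOME/CloudFolder/"
        DEST="/mnt/backup/cloud-snapshots"
        TODAY=$(date +%F)

        mkdir -p "$DEST/$TODAY"
        # --link-dest hard-links unchanged files to the previous snapshot,
        # so each day costs only the changed bytes.
        rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"
        ln -sfn "$DEST/$TODAY" "$DEST/latest"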

  • Share folder with active directory group permissions

    - by Hihui
    I have a Debian box that is a member of our AD (which is 2k3). I want to share two folders from the Debian machine: one with full access for everyone, the second readable only by the groups "ADM" and "PROD". Part of smb.conf:

        [global]
        workgroup = MYDOMAIN
        realm = MYDOMAIN.LOCAL
        netbios name = SERV-FTP
        wins server = "IP serv 2k3"
        security = domain

        [JUKEBOX]   // full access
        path = /media/JUKEBOX/JUKEBOX
        comment = sharing
        writable = yes
        browsable = yes
        public = yes
        read only = no
        valid users = @ASYLUM\prod_std
        admin users = @ASYLUM\ADM

        [SOFTWARE]
        comment = Software
        path = /media/JUKEBOX/SOFTWARE
        valid users = @ASYLUM\prod_adv, @ASYLUM\ADM
        writable = yes
        read only = no

    My log:

        [2013/10/25 09:24:37.316643, 0] smbd/service.c:1055(make_connection_snum)
          canonicalize_connect_path failed for service SOFTWARE, path /media/JUKEBOX/SOFTWARE

    And from my Windows client, when I try to access that folder: "Windows can't access \\serv-ftp\software". Where is the problem? Thx!
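
    A quick sanity check worth running first, since canonicalize_connect_path usually means smbd cannot resolve the share's path on disk (a diagnostic sketch, not a guaranteed fix):

        # Does the path exist, and is every component traversable?
        ls -ld /media/JUKEBOX /media/JUKEBOX/SOFTWARE

        # Validate smb.conf and show the share definitions as Samba parses them
        testparm -s

        # Confirm the filesystem backing /media/JUKEBOX is actually mounted
        mount | grep JUKEBOX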

  • CSF Unresolved issue

    - by josephmarhee
    I began receiving service failures for CSF/LFD once the limit was reached in iptables, preventing the service from working properly. I flushed all iptables rules and redid my rules using CIDR ranges rather than the individual IPs that were listed, but the issue persists. Error:

        The VPS iptables rule limit (numiptent) is too low (1527/1536) - stopping firewall to prevent iptables blocking all connections, at line 1459

    This is after restarting CSF, which gave me:

        You have an unresolved error when starting csf. You need to restart csf successfully to remove this warning

    CSF still seems to be trying to enforce rules that no longer exist (it lists entire chains when I try to restart it, only to fail with that error). Any idea what's going on?
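
    Since numiptent is an OpenVZ bean counter, a sketch of how to compare usage against the limit from inside the container (assuming this VPS is OpenVZ-based, which the error strongly suggests):

        # held/maxheld vs. barrier/limit for numiptent
        grep -E 'uid|numiptent' /proc/user_beancounters

        # How many rules are actually loaded right now
        iptables-save | wc -l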

  • CentOS 6 init script doesn't work properly

    - by user711643
    I'm setting up my Ruby production server based on CentOS 6. I need a process called god (a process monitoring tool) to start at boot. I'm using an init script that I found here. Just as stated in the guide, I ran:

        chkconfig --add god
        chkconfig --level 345 god on

    After this, if I run service god start|restart, everything works: it loads the available configurations and brings up the related processes (if they are not running). The problem is that it doesn't work at boot. If I reboot the system and then do ps aux | grep god, god is running, but apparently it didn't load the configuration files. If I run service god restart again, it loads everything without problems. What am I doing wrong?
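
    One hedged guess worth checking: if god starts too early in the boot sequence, its configs may reference paths or services that do not exist yet. The chkconfig header in the init script controls ordering; a start priority of 99 defers it to the end of the runlevel (the header below is illustrative):

        #!/bin/bash
        # chkconfig: 345 99 01
        # description: god process monitoring framework
        #
        # After editing the header, rebuild the rc.d symlinks:
        #   chkconfig --del god && chkconfig --add god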

  • Minimum rights to access the whole Users directory on another computer

    - by philipthegreat
    What are the minimum rights required to access the Users directory on another computer via an admin share? I have a batch file that writes some information to a few other computers using a path of \\%COMPUTERNAME%\c$\Users\%USERNAME%\AppData\Roaming. The batch files run under an unprivileged user (part of Domain Users only). How do I set appropriate rights so that the service account can access the AppData\Roaming folder for every user on another computer? I'd like to grant rights lower than Local Admin, which I know will work. Things I've attempted:

    • As Domain Admin, giving Modify rights to the C:\Users\ directory on the local computer. Error: Access Denied.
    • Setting the service account as Local Admin on the other computer. This works, but is against IT policy where I work; I'd like to accomplish this with rights lower than Local Admin.

    Any suggestions?
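
    One caveat: the C$ admin share is restricted to Administrators regardless of NTFS permissions, so NTFS rights alone will not open \\computer\c$ to a non-admin account. A common alternative is a dedicated share over C:\Users scoped to the service account; a sketch (the share name and account are placeholders; run elevated on each target machine):

        :: Publish C:\Users under a dedicated share with change rights
        net share UsersData=C:\Users /grant:MYDOMAIN\svc-writer,CHANGE

        :: Grant the service account modify rights on the underlying folders
        icacls C:\Users /grant "MYDOMAIN\svc-writer:(OI)(CI)M" /T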

  • Will a 5-terabyte NAS drive be compatible with Windows XP SP3 32-bit?

    - by TrevorBoydSmith
    (NOTE: The operating system, in this case Windows XP SP3 32-bit, is not a choice.) I am trying to set up a short-term storage device. First, I found a large 5-terabyte NAS drive that would IMO fulfill my storage requirements. Second, I found that Windows XP seems to have a hard drive size limit (see 'Is there a limit to the size of a hard drive for Windows XP pre-SP1?'):

        XP should handle up to 2 TB per volume after the service packs are applied. You are correct. There was a 137 GB limit on the original pre-service-pack Windows XP. This was addressed/fixed in SP1.

    My question is: will my Windows XP SP3 32-bit machine see the 5-terabyte NAS and be able to read/write properly to it?
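
    One point of reasoning, hedged rather than tested: the 2 TB ceiling applies to local MBR-partitioned volumes, whereas a NAS is reached as a network share, which XP maps to a drive letter rather than mounting as a local disk:

        rem Map the NAS share from Windows XP (names are placeholders)
        net use Z: \\NAS-NAME\share /persistent:yes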

  • Password Authentication Problems

    - by Bobby Hathorn
    I am new to Ubuntu and am extremely delighted with the performance and speed compared to Windows 7. However, I messed up, I think: when I booted my USB disc I set a password, as directed, and when Ubuntu booted up I tried to reset my password via User Accounts to "None". Now the Password Authentication window prevents me from downloading software (Audacity and my Ubuntu updates). Also, I've tried to boot into GRUB and the recovery console, as directed; however, the PC bypasses GRUB and boots into Ubuntu instead. And when attempting to use the terminal as directed to change the password, I'm given a password prompt there also. If the problem is on my end, could you email/reset my password? My PC is an eMachines EL1358G. I am otherwise happy with Ubuntu!
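
    For anyone stuck the same way, a sketch of the usual recovery path on a default GRUB 2 install (the username below is a placeholder):

        # Hold Shift immediately after the BIOS screen to force the GRUB menu,
        # then pick a "recovery mode" entry and choose the "root" shell option.

        # Recovery mode mounts / read-only; make it writable first
        mount -o remount,rw /

        # Set a fresh password for your login account
        passwd bobby

        reboot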

  • Empathy Sametime client ID

    - by user91860
    I have been using Pidgin as an all-in-one IM client, but now that Empathy is the default app in Ubuntu I wanted to try it out. I have a Sametime account at an external company that is keen to restrict access to their service to certain Sametime client versions only. I was able to trick it with Pidgin by specifying the following settings in accounts.xml:

        <setting name='client_minor' type='int'>8510</setting>
        <setting name='client_major' type='int'>30</setting>
        <setting name='client_id_val' type='int'>4876</setting>

    I tried to do the same in Empathy but failed. As far as I know, Pidgin and Empathy use the same connector plugin for Sametime, so the functionality should be there somewhere, but there is little information about the configuration files, and what exists doesn't discuss any Sametime-specific options.

  • Redirecting email from domain registrar to hosting company mailbox

    - by jmoreno
    I have the domain example.com registered with company A. I have hosting with company B (ServerGrove). Company A offers simple parking, and company B offers a mailbox service. What I would like is to use the hosting company's mail service. How do I configure the DNS records at company A so mail is delivered to the company B mailbox? I think I have to add an MX record to company A's configuration, and then the same in company B's DNS records; is this correct? I think I'm mixing concepts and cannot see a clear solution; I've tried several configurations but all failed. Any help would be appreciated. Regards.
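
    For illustration: the MX record belongs only in the zone that is authoritative for the domain (company A here), and it points at the hostname of company B's mail server; nothing mirroring it is needed on company B's side. In zone-file notation (the mail host is a placeholder):

        ; company A's zone for example.com
        example.com.    IN  MX  10  mail.companyb-host.com.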

  • NAT with iptables: reconnecting fails within about 3 minutes

    - by xpu
    I set up NAT with iptables as follows:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -I PREROUTING -p tcp --dport 9000 -j DNAT --to xx.xx.xx.xx
        iptables -t nat -I POSTROUTING -p tcp --dport 9000 -j MASQUERADE
        service iptables save
        service iptables restart

    The configuration works fine, but there is a problem when I disconnect and try to reconnect: the connection is refused for roughly 2-3 minutes, after which things go fine again. What is the problem? How do I make it accept a new connection right after the old one breaks?
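
    A sketch of one likely culprit to check: the old connection may linger in the kernel's connection-tracking table (e.g. in TIME_WAIT) and shadow the new one until its timeout expires. With the conntrack-tools package installed:

        # List tracked connections for the forwarded port
        conntrack -L -p tcp --dport 9000

        # Delete the stale entries so a fresh connection is NATed immediately
        conntrack -D -p tcp --dport 9000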

  • Partner Webcast - Oracle WebCenter: Portal Highlights - 31 Oct 2013

    - by Roxana Babiciu
    Oracle WebCenter is the center of engagement for business. In order to succeed in today's economy, organizations need to engage with information across all channels to ensure customers, partners and employees have access to the right information in the context of the business process in which they are engaged. The latest release of Oracle WebCenter addresses this challenge with updates across its complete portfolio. Nowadays, portals are multi-channel applications that enable the creation, sharing and distribution of personalized content, as well as access to social networking and self-service capabilities. Web 2.0 and social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. The new release of Oracle WebCenter Portal makes it easier and faster for business users to create intuitive portals with integrated application content:

    • Streamlining development with an integrated set of tools for web and mobile.
    • Providing out-of-the-box templates for common use cases.
    • Expediting the portal creation experience with new development tools.

    These empower business users to build and deploy mobile portals and websites with unprecedented speed, without having to wait for IT, which leads to a shorter time to market and reduced costs. Join us to discover a web platform that allows organizations to quickly and easily create intranets, extranets, composite applications, and self-service portals, providing users a more secure and efficient way of consuming information and interacting with applications, processes, and other users: the latest Oracle WebCenter Portal release 11gR1 PS7.

  • Can a Camel Jetty proxy URL share the web application context?

    - by user1750353
    I am struggling with Camel Jetty proxy routes. At times, the routes exhibit inconsistent behavior. My proxy app is deployed with context root "Proxy"; however, if I use that as the context path for my proxy URL, I get a "service not found" error. If I change "Proxy" to an arbitrary context such as "Dummy", then the route works. Is that how the camel-jetty component works?

    From path:

        jetty:http://0.0.0.0:6080/Proxy/PurchaseOrder/?matchOnUriPrefix=true&disableStreamCache=true&traceEnabled=true

    To path:

        jetty:http://localhost:7001/Provider/PurchaseOrder/?bridgeEndpoint=true&throwExceptionOnFailure=false

    Another issue I noticed with Jetty: if I deploy both Proxy and Provider on the same app container (same listening port), then the route completely stops working, saying "Provider/PurchaseOrder/" service not found. The only way the routes work is if the two routes run on different ports and the from route doesn't share the web context path. I have a requirement that, if needed, I should be able to run both Proxy and Provider on the same container. Any help appreciated. Thanks,

  • Searching Objects on SonicWALL (NSA 2600, SonicOS 6.1)

    - by Justin Scott
    OK, this may sound like a dumb question, but does the SonicOS web interface not have a search option for object definitions? One of my clients recently decided to replace their Astaro Security Gateways with SonicWALL firewalls. These sit in front of a small data center full of servers, and we have several hundred custom service and address definitions that need to be ported. The SonicOS interface provides a basic list for service and address definitions but no search option to be found. To make it worse, there is no option to list them all on one page (they're paginated 50 at a time), so I can't use the text search in the web browser either. The Astaro units have a nice search option on their definitions, so perhaps I just got spoiled by their software. Am I missing something, or is there some way to find an object without paging through the list manually?

  • Securing a Cloud-Based Data Center

    - by Orgad Kimchi
    No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access to data and applications. In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment. As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users. Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here.

    When we build a cloud, the following aspects related to the security of the data and applications in the cloud become a concern:

    • Sensitive data must be protected from unauthorized access while residing on storage devices, during transmission between servers and clients, and when it is used by applications.
    • When a project is completed, all copies of sensitive data must be securely deleted and the original data must be kept permanently secure.
    • Communications between users and the cloud must be protected to prevent exposure of sensitive information from man-in-the-middle attacks.
    • Limiting the operating system's exposure protects against malicious attacks and penetration by unauthorized users or automated "bots" and "rootkits" designed to gain privileged access.
    • Strong authentication and authorization procedures further protect the operating system from tampering.
    • Denial-of-service attacks, whether started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored.

    In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system. Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. With Oracle Solaris 11, the security of any cloud is ensured. This article explains how.

  • How to build an API on top of an existing Rails app with Node.js, and what architecture to use?

    - by javiayala
    The explanation
    I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. They recently merged with a company from another country as a strategy to grow the business and the profit. The other company has almost the same stats, a similar strategy, and mobile apps. We have just decided that we need to merge the data from both companies and start a new app from scratch, since the RoR app is too old and heavily patched and the app from the other company was built with a custom PHP framework without any documentation. The only good news is that both databases are in MySQL and have a similar structure.

    The challenge
    I need to build a new version that: can handle a lot of traffic; preserves the SEO strategies of both companies; serves two different domains; and has a strong API that can support legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and Node.js with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think that I need to create a service that can handle a lot of IO requests, since the app is heavily used to create orders for restaurants at a certain time of day.

    The questions
    With all this in mind: What type of architecture do you recommend I follow? What gems or Node packages do you think will work best? How do I build a new Rails app and keep using the same database structure? Should I use Node.js to build the API, or just build a new service with Ruby? I know I'm asking a lot from you guys, but please help by answering any part you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated. Thanks!

  • Cannot start Apache 2.2.22 in Fedora 15

    - by Roderik
    I am trying to start Apache 2.2.22 under Fedora 15 on my local machine. After fixing some errors related to missing modules, httpd -t just gives me 'Syntax OK'. However, when I try to start Apache as the root user with service httpd start, it still returns:

        Starting httpd (via systemctl):  Job failed. See system logs and 'systemctl status' for details.
                                                                   [FAILED]

    When running systemctl, I don't see any extra information other than:

        httpd.service    loaded failed failed    LSB: start and stop Apache HTTP Server

    So I wonder where to look now to get this back up and running.
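
    A sketch of the usual next places to look on a systemd-based Fedora (assuming the stock httpd packaging):

        # systemd's view of the unit, including recent output
        systemctl status httpd.service

        # Apache's own error log usually names the real failure
        tail -n 50 /var/log/httpd/error_log

        # Messages from the init scripts land in the system log
        tail -n 50 /var/log/messages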

  • Created Custom Report in Google Analytics, Primary Account Doesn't See It?

    - by Anagio
    A client shared access to their Google Analytics account with me. I created a custom report, which shows up under Custom Reporting for me. I assumed they would also see this report, since it was in their account, but they sent me a screenshot showing there's no custom report listed. I have already sent them the shortcut link to the custom report configuration; this seems to be the way to share custom reports, along with dashboards, in GA now. Do custom reports only appear for the account (email) that created them? I would think everyone with access to the account would see the custom report.

  • Allowing client to select data to return via REST interface

    - by CMP
    I have a REST service that is essentially a proxy to a variety of other services. So if I call GET /users/{id}, it will fetch the user's profile, as well as order history, contact info, etc., all from various services, and aggregate them into one nice object. My problem is that each call to a different service has the potential to add time to the original request, so we would rather not fetch ALL the data ALL of the time if a particular client does not care about all of the pieces. A solution I have arrived at is to do something like this:

        GET /users/{id}?includeOrders=true&includeX=true&includeY=true...

    That works, and it allows me to do only what I need to, but it is cumbersome. We have added enough different data sources that there are too many parameters for that style to be useful. I could do something similar with a single integer and a bitmask, but that only makes it harder to read, and it does not feel very RESTful. I could break it down into multiple calls, so clients would need to call /users/{id}/orders and /users/{id}/profile separately, but that sort of defeats the purpose of an aggregating proxy, whose purpose is to make clients' jobs easier. Are there any good patterns that can help me return just enough data for each client, without making it too difficult for them to filter and select what they want?
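
    One widely used pattern, sketched with an illustrative endpoint: a single fields parameter naming the sub-resources to aggregate, with a sensible default when it is absent (this mirrors the "partial response" convention used by several public APIs):

        # Default: the proxy aggregates everything
        curl "https://api.example.com/users/42"

        # Partial response: only the named sections are fetched and merged
        curl "https://api.example.com/users/42?fields=profile,orders"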

  • Remote connection issue with SQL Server 2005 via SSMS and services but not IIS

    - by Mallioch
    Here is the situation: I have a Server 2008 box that is trying to connect to a SQL Server 2005 instance. Connections from websites running in the context of IIS work fine to the SQL Server machine using SQL Server authentication. Rockin'. However, using the same connection string, I cannot get a Windows service on the same box to communicate with the SQL Server, nor can I get Management Studio to connect from the same box. IIS works great; the other options not so much. For grins, I have tried monkeying with the user accounts in the IIS app pools to match that of the service to get the sites to break, and that hasn't worked, so it doesn't appear to be a user account issue. Since this is happening with two different programs and not with IIS, I'm assuming there is something shut down on the SQL Server that needs to allow non-IIS things to communicate, but I have no idea what that would be. Any help would be appreciated.
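
    A diagnostic sketch to separate a network problem from an authentication one, run from the Server 2008 box (host name and credentials are placeholders):

        :: Is the SQL Server TCP port reachable at all from this machine?
        telnet SQLHOST 1433

        :: Try the same SQL login outside IIS
        sqlcmd -S SQLHOST -U appuser -P apppassword -Q "SELECT @@VERSION"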

  • Restricting A Directory Through .htaccess

    - by Whitechapel
    I'm trying to put all of my FTP accounts into a folder at /public_html/ftp and password-protect it so search bots can't crawl their private files. I'm also trying to redirect all site traffic from the non-www to www. I keep getting 500 errors when accessing the site, and I need to redirect www.vivalanation.com/ftp to www.vivalanation.com/ftp/, because /ftp just errors out; you need the trailing slash. Here is my .htaccess in the /public_html/ftp folder:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        AuthName "FTP Access"
        AuthType Basic
        AuthUserFile /home1/vivalst/.htpasswds/public_html/ftp/passwd
        Require valid-user

    I created a passwd file in /.htpasswds/public_html/ftp. And here is my basic .htaccess in the root of /public_html/:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
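
    A hedged note on the trailing slash: Apache's mod_dir normally issues that redirect itself (the DirectorySlash directive, on by default), so if /ftp errors out, a rewrite rule is likely catching the request first. A sketch for the root .htaccess:

        # Send /ftp (no slash) to /ftp/ before the other rewrites run
        RewriteRule ^ftp$ /ftp/ [R=301,L]

        # Or make sure mod_dir's automatic redirect is in effect
        DirectorySlash On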
