Search Results

Search found 10622 results on 425 pages for 'shared hosting'.


  • Missing shared library for Rhythmbox

    - by user1450120
    After I upgraded from 13.04 to 13.10, Rhythmbox wouldn't work. After many failed attempts I ended up uninstalling and removing all traces of Rhythmbox I could find. Now I've reinstalled Rhythmbox and am getting the error:

        rhythmbox: error while loading shared libraries: librhythmbox-core.so.7: cannot open shared object file: No such file or directory

    I've tried sudo apt-get install librhythmbox*, only to get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'librhythmbox-core5' for regex 'librhythmbox*'
        Note, selecting 'librhythmbox-core6' for regex 'librhythmbox*'
        Note, selecting 'librhythmbox-core7' for regex 'librhythmbox*'
        librhythmbox-core7 is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.

    Any ideas on how to get Rhythmbox back to a working state?
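
    A generic first check (a sketch, not part of the original question) is to confirm whether the dynamic loader can see the library at all and, if not, to force the package's files back onto disk:

        # Does the dynamic loader know about librhythmbox-core.so.7?
        ldconfig -p | grep librhythmbox
        # Does the package actually provide that file?
        dpkg -L librhythmbox-core7 | grep librhythmbox-core.so.7
        # Reinstall the library and the application, then refresh the loader cache:
        sudo apt-get install --reinstall librhythmbox-core7 rhythmbox
        sudo ldconfig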

    Read the article

  • Configuration issue with respect to .htaccess file on Ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site works on localhost, but behaves as if no .htaccess rules were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. Can anyone point out the mistake, if any, in the above configuration, or tell me what else I should check? The output of sudo apache2ctl -M is:

        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgi_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         reqtimeout_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK

    I am using Apache2 on Ubuntu 12.04.
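
    A quick generic diagnostic (not part of the original post) for "my .htaccess seems ignored" is to break the file on purpose: if Apache is reading it, a syntax error turns the 404 into a 500.

        # Append a deliberately invalid directive:
        echo "ThisIsNotADirective" | sudo tee -a /var/www/tshirtshop/.htaccess
        # A 500 here means the file is being read; still 404 means it is ignored:
        curl -I http://localhost/tshirtshop/nature-d2
        # Remove the test line again:
        sudo sed -i '$ d' /var/www/tshirtshop/.htaccess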

    Read the article

  • GoDaddy VPS or switch to another provider?

    - by Charlie
    Long story short, I want to have close to full control over my server, mainly being able to install things on my server that GoDaddy shared hosting does not allow (gzip, etc.). Should I switch providers to something else (all my domains are hosted there), or upgrade my hosting to a VPS? If I upgrade, how hard will it be to set it up without their "assisted service" (where they do virus scanning, etc.)?

    Read the article

  • Library missing for executable file

    - by user1610406
    There's an executable I downloaded onto my Ubuntu 10.04 machine that I can't run because it's missing a library. I have also tried compiling the source with CMake. This is my terminal output:

        zack@zack-laptop:~/Desktop$ ./MultiMC
        ./MultiMC: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

    I think I need libssl 1.0 to run this file, but I'm not sure. Any help?
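
    As a generic sketch (not from the original post), you can list every unresolved dependency of the binary and then look for a package providing it; note that libssl1.0.0 may simply not be packaged for a release as old as 10.04:

        # Show all shared libraries the binary cannot resolve:
        ldd ./MultiMC | grep "not found"
        # Look for a matching package on this release:
        apt-cache search libssl
        sudo apt-get install libssl1.0.0   # only if it exists for your release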

    Read the article

  • How to set up a web hosting site?

    - by Thomas John
    Hi all, I have purchased a cPanel/WHM web hosting reseller account and I want to set up a site where people can sign up for hosting accounts. I would also like to have a domain name registration system on the site, so people can register the domain name they would like to host with me. How can I do this? Are there any ready-made scripts available, or should I create my own script using the WHM API? Thanks a lot.
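
    For the WHM API route, account creation is exposed as a single HTTP call. A minimal sketch (the host name, token, and account values below are placeholders):

        # Create a hosting account via WHM API 1, authenticated with an API token:
        curl -H "Authorization: whm root:YOUR_API_TOKEN" \
          "https://your-server.example:2087/json-api/createacct?api.version=1&username=newuser&domain=example-customer.com"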

    Read the article

  • How do I install libtiff.so.3?

    - by Osama Ahmaro
    I installed Maya 2011 on Ubuntu 10.04 many times and it always worked. Recently I upgraded my Ubuntu from 10.04 to 11.04, and when I tried to install Maya 2011 on it I wasn't able to get it working. After installing Maya 2011 on Ubuntu 11.04, I try to run Maya from the terminal and this is what I get:

        error while loading shared libraries: libtiff.so.3: cannot open shared object file: No such file or directory

    Please help. Note: Maya is a 3D graphics application.
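
    For what it's worth, a commonly suggested (and unsupported) workaround for this class of error, sketched here with paths that may differ on your system, is to point the old soname at a newer libtiff; this only works if the ABI is still compatible:

        # See which libtiff versions the loader already knows about:
        ldconfig -p | grep libtiff
        # If only libtiff.so.4 is present, try a compatibility symlink:
        sudo ln -s /usr/lib/libtiff.so.4 /usr/lib/libtiff.so.3
        sudo ldconfig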

    Read the article

  • Premium cPanel Accounts Giveaway

    - by mamta
    I found Book My Cloud.com, which is offering free cPanel hosting accounts. The way to request one is to contact their support team (hxxp://support.book my cloud.com/index.php?a=add) and select the request type "Free Cpanel Hosting Related". They provide instant activation and the following limits:

        Disk quota:         10 GB
        Monthly bandwidth:  300 GB
        Max FTP accounts:   5
        Max email accounts: Unlimited
        Max email lists:    Unlimited
        Max databases:      500
        Max sub domains:    500
        Max parked domains: 100
        Max addon domains:  1000
        Control panel:      cPanel 2012

    Read the article

  • Outlook shared address book and contacts not displaying

    - by user224061
    We have a shared Exchange address book with distribution email groups. When someone connects to the shared address book and composes an email to a group, the distribution list appears empty until it is expanded. In troubleshooting, I noticed that when we expand the distribution list to view the recipients, most of the recipients are missing and only semicolons appear. Further troubleshooting showed that if I open the distribution list with my Outlook client, click on the Update Now icon, and then go to create the email, the email addresses do appear when I expand the group. Now, my Outlook profile is a cached profile, while the shared contact list that I pulled the distribution list from is an online (non-cached) shared contact list. I also found that if I switch my Outlook client to online only (not cached), the shared address book lists appear properly when expanded. Is there any way to make these lists appear correctly without clicking Update Now for each and every distribution list in the shared contacts list we have on the server? I would really prefer that users not have to click the Update Now button or switch out of cached mode every time they want to use this shared contact list. Thanks in advance.

    Read the article

  • What kind of online hosting do I need for a WCF-based service?

    - by mafutrct
    First of all, I'm not sure if SO is the right place to ask; please migrate me if needed. I would like to host a WCF-based service so it is available for everyone. While hosting it on my personal, local servers succeeded, I would prefer to move it to an external service provider for various reasons. I'll be blunt: I have no clue about hosting providers. I know there are web hosters, virtual and root servers, and several other services. What I would like to know is what kind of hosting I need in my case. I understand that a root server would easily fulfill my requirements, but that is not exactly cheap. My requirements are:

    - The program I'd like to run on the server requires .NET 4, preferably on a Windows machine.
    - Access to a folder in the file system is much appreciated (1 GB of storage is enough by far).
    - Communication with clients (applications written in .NET) happens via a port opened on the server.
    - Traffic is low (<<1 GB/month?).
    - There is no website.
    - Having the provider perform updates would be nice.

    My understanding is that a virtual server would be a possible solution. Prices seem to start at around 5€/month, which is OK for me. However, I read that for these cheap solutions RAM is severely limited (~400 MB), and I'm not confident that is enough to run Windows and a .NET application.

    Read the article

  • What is the point of having a key_t if what will be the key to access shared memory is the return value of shmget()?

    - by devoured elysium
    When using shared memory, why should we care about creating a key

        key_t ftok(const char *path, int id);

    in the following bit of code?

        key_t key;
        int shmid;

        key = ftok("/home/beej/somefile3", 'R');
        shmid = shmget(key, 1024, 0644 | IPC_CREAT);

    From what I've come to understand, what is needed to access a given shared memory segment is the shmid, not the key. Or am I wrong? If what we need is the shmid, what is the point in not just creating a random key every time?

    Edit: at the linked page one can read:

        What about this key nonsense? How do we create one? Well, since the type key_t is actually just a long, you can use any number you want. But what if you hard-code the number and some other unrelated program hardcodes the same number but wants another queue? The solution is to use the ftok() function which generates a key from two arguments.

    Reading this, it gives me the impression that what one needs to attach to a shared-memory block is the key. But this isn't true, is it? Thanks
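
    As an aside (a generic illustration, not from the original question), the key/shmid relationship is easy to observe on a live system: the key is the agreed-upon lookup value, while the kernel assigns the shmid.

        # List System V shared memory segments; note the separate
        # 'key' and 'shmid' columns in the output:
        ipcs -m
        # A segment is removed by its kernel-assigned shmid, not its key:
        # ipcrm -m <shmid>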

    Read the article

  • Facebook shared links not opening (not redirecting correctly) [on hold]

    - by Hammad
    I have been having a problem, and after hours of searching have found nothing useful to counter it. I have a website, and that website has an RSS feed attached to a Facebook page. As I post content to the website, it also appears on the Facebook page. But people are complaining on my page that my posts don't open when they click the link to read the details. Since I am a Chrome user, I didn't notice this happening for months. But when I checked with Firefox and Internet Explorer, I found the shared links actually don't open in those browsers. They only work in Chrome; that is, they are properly redirected in Chrome but not in Firefox and IE. Whenever I click on links of posts on my page through IE or Firefox, the URL simply doesn't redirect to my website and I see nothing, as if I were not connected to the Internet. When examining the URL I see this:

        http://www.facebook.com/l.php?u=http%3A%2F%2Fbit.ly%2F112NVO9&h=EAQHDgTUv&s=1

    which shows that Facebook is not redirecting the links properly. Moreover, I use the link-shortening service bit.ly for my shared links, but I have checked and the same problem exists even if I don't shorten them. And I checked that I am not alone: even links from the tech giant mashable.com don't open in IE and FF from Facebook; they only open in Chrome. From a mobile phone, the links don't open (redirect properly) even in Chrome. Can anyone tell me what the issue is? Nothing much is written about it on the Internet, as if no one else has faced this problem. P.S.: I have checked from different systems; the problem persists.
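
    A browser-independent way to see where such a link actually leads (a generic sketch using the URL quoted above) is to trace the redirect chain from the command line:

        # Follow every redirect and print each hop's status line and Location header:
        curl -sIL "http://www.facebook.com/l.php?u=http%3A%2F%2Fbit.ly%2F112NVO9&h=EAQHDgTUv&s=1" \
          | grep -iE "^(HTTP|Location)"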

    Read the article

  • Shared pointers causing weird behaviour

    - by Setzer22
    I have the following code in SFML 2.1.

    Class ResourceManager:

        shared_ptr<Sprite> ResourceManager::getSprite(string name) {
            shared_ptr<Texture> texture(new Texture);
            if (!texture->loadFromFile(resPath + spritesPath + name))
                throw new NotSuchFileException();
            shared_ptr<Sprite> sprite(new Sprite(*texture));
            return sprite;
        }

    Main method (I'll omit most of the irrelevant code):

        shared_ptr<Sprite> sprite = ResourceManager::getSprite("sprite.png");
        ...
        while (renderWindow.isOpen())
            renderWindow.draw(*sprite);

    Oddly enough this makes my sprite render completely white, but if I do this instead:

        shared_ptr<Sprite> ResourceManager::getSprite(string name) {
            Texture* texture = new Texture; // <------- from shared pointer to raw pointer
            if (!texture->loadFromFile(resPath + spritesPath + name))
                throw new NotSuchFileException();
            shared_ptr<Sprite> sprite(new Sprite(*texture));
            return sprite;
        }

    it works perfectly. So what's happening here? I assumed the shared pointer would work just like a pointer. Could it be that the texture is getting deleted? My main method is keeping a reference to the sprite, so I don't really understand what's going on here. :S

    EDIT: I'm perfectly aware that deleting the sprite won't delete the texture, and that this creates a memory leak I'd have to handle; that's why I'm trying to use smart pointers in the first place...

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured; if they aren't followed, you may run into some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database, and metadata being stored in the database along with other structured data, there is other information being read and written to on the file system. Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files. In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other. WCC keeps track of this through the use of lock files on the file system. Because of this, each node of the WCC cluster must have access to the same file system, just as it has access to the same database.

    WCC uses its own locking mechanism based on files, so it also needs to access those files without file attribute caching and without locking being done by the client (node). If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on. Or if a particular file is locked by one of the node clients, this could interfere with access by another node. Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to disable caching and locking only on the particular folders which require it.

    When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder. Starting in PS5, it also asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish the Native File (Vault) and Weblayout directories; these will be used for handling temporary files, cached files, and files used to deliver the UI.

    For these directories, the only folder which needs to have file attribute caching and locking disabled is the 'Content Server Instance Folder'. So when establishing this share through NFS or a clustered file system, be sure to specify those options: for instance, if creating the share through NFS, use the 'noac' and 'nolock' mount options. For the other directories, caching and locking should be enabled to provide the best performance for those locations.
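
    As a concrete illustration of those mount options (the server name and export paths below are hypothetical), /etc/fstab entries for the shares might look like this:

        # Content Server instance folder: attribute caching and client-side
        # locking disabled, since WCC manages its own lock files:
        nfsserver:/export/ucm_cs     /mnt/share_no_cache    nfs  rw,noac,nolock  0  0
        # Vault and weblayout shares keep caching and locking for performance:
        nfsserver:/export/ucm_files  /mnt/share_with_cache  nfs  rw              0  0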
    These directory path configurations are contained within the <domain dir>/ucm/cs/bin/intradoc.cfg file:

        #Server System Properties
        IDC_Id=UCM_server1

        #Server Directory Variables
        IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
        FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
        AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
        AppServerJavaUse64Bit=true
        IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
        VaultDir=/mnt/share_with_cache/ucm/cs/vault/
        WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

        #Server Classpath variables

        #Additional Variables
        #NOTE: UserProfilesDir is only available in PS5 - 11.1.1.6.0
        UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it's also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory. So on each node, go to <domain dir>/ucm/cs/bin/intradoc.cfg and add these additional configuration entries:

        VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
        TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
        EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don't forget the cluster-specific configuration values. These can be added through Admin Server -> General Configuration -> Additional Configuration Variables, or directly in the <IntradocDir>/config/config.cfg file:

        ArchiverDoLocks=true
        DisableSharedCacheChecking=true
        ServiceAllowRetry=true     (use only with an Oracle RAC database)
        PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site. In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • Which hosting will let me execute my own EXE with PHP?

    - by guitar-
    I have a task that PHP (or any server-side scripting language) isn't practical for. It involves a lot of file I/O, processing, etc., and it will execute a lot faster as the program I wrote in C than it would in PHP. Do any hosts allow you to upload your own EXE files and run them on the server using PHP's exec, shell_exec, etc. functions? Do you need a dedicated server to do this? Also, I don't know if Facebook's PHP HipHop is out yet, but I really don't want to use that.
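
    A quick generic test of whether a given host permits exec() (run on the server in question; a sketch, not tied to any particular provider):

        # exec() must exist and must not be listed in disable_functions:
        php -r 'var_dump(function_exists("exec"), ini_get("disable_functions"));'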

    Read the article

  • Network printer - print directly or via a shared printer on the server?

    - by NickC
    It has occurred to me that a workstation can connect to a printer in two ways:

    1) Printing directly to the IP of the printer, with the print driver installed locally.

    2) Printing to a \\Server\Printer1 share, with the print queue residing on the server.

    The question is: which way is preferred? I would assume that printing directly to a network printer, rather than going through the server, would be the most efficient from the point of view of network traffic. On the other hand, I guess a server printer share would be easier to manage, with the correct driver automatically being downloaded to the workstations. Also, what about using GPP (Server 2012) to install the printer on the workstations; does that require a specific approach?

    Read the article

  • How to configure my hosting provider for email when DNS redirection is already in place?

    - by Johann
    First of all, I apologize for my poor understanding of networking, and I hope I've used the correct terminology for everything. I bought a domain name with 1&1 to create an online shop using Kingeshop. I followed the steps explained here to point the domain name to Kingeshop's IP address, and everything is working fine. Now I would also like to use my domain name to create an email account, but 1&1 seems to say they're not responsible for this now that the DNS is transferred to Kingeshop's servers. What I'm wondering is: who is really "in charge" of my domain name now? 1&1 also lets me configure the MX record, but I don't have a clue what I can do with this. Could I just use an existing Gmail address somehow? Thanks a lot for any help or clarity you can bring me on this matter!
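
    One generic way to see who currently answers for a domain (substitute the real domain for example.com) is to query its NS and MX records directly:

        # Which name servers are authoritative for the domain?
        dig +short NS example.com
        # Which hosts currently receive its mail?
        dig +short MX example.com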

    Read the article

  • Hosting a web site at home: is my ISP blocking port 80?

    - by tombull89
    Hello, I presume this is a better place to put this than Server Fault. I'm interested in setting up a small site to host at home as a "proof of concept" exercise, i.e. to prove that I know how to do it. I've got a (virtual) Server 2003 machine with a site on it, all configured with port forwarding through to port 80 on my server. I have a Belkin F5D7634 into which I have put my DynDNS details, but when I try to go to my DynDNS address it comes up with "the page cannot be displayed". My ISP is Carphone Warehouse/AOL and I've been unable to find any information on whether they block port 80. If they do, can anybody recommend a home provider that does not block port 80? Regards, Tom.
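
    A quick way to tell an ISP block from a forwarding problem (a generic sketch; the host name is a placeholder) is to test the port from outside your own network:

        # Run this from a machine outside your LAN:
        curl -I --connect-timeout 10 http://your-name.dyndns.org/
        # A timeout suggests port 80 is blocked or not forwarded;
        # any HTTP response means it is reachable.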

    Read the article

  • A good file management/hosting/storage web service with an embed function?

    - by Andreas
    I am looking for a file management web service that lets me integrate the directory view into a commercial website. Another requirement: users can register themselves, but need to be approved before being able to download files. So, something like box.net, but with more than just a Flash widget. I would prefer some JavaScript that can be embedded. Thanks for any recommendations.

    Read the article

  • Help with hosting an Apache server for LAN-connected computers?

    - by akhilesh
    I have built a JSP project, but I really want other computers connected to mine to be able to access the website; I have never done this before. Please help me. Right now my server can be accessed at http://localhost:8080 on my local machine. What configuration do I need to do? Please post a link or step-by-step instructions.
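
    In broad strokes (a generic sketch assuming a Linux host; on Windows, use ipconfig and its firewall settings instead), other machines reach your server through your LAN address rather than localhost:

        # Find this machine's LAN address (e.g. 192.168.1.x):
        ip addr show
        # Other computers on the same network then browse to:
        #   http://<that-address>:8080/
        # If the page doesn't load, allow the port through the firewall:
        sudo ufw allow 8080/tcp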

    Read the article

  • How to secure Apache for shared hosting environment? (chrooting, avoid symlinking...)

    - by Alessio Periloso
    I'm having problems dealing with my Apache configuration. The problem is that I want to limit each user to his own docroot (so a chroot() would be what I'm looking for), but:

    1) mod_chroot works only globally and not per virtual host: I have the users in a path like /home/vhosts/xxxxx/domains/domain.tld/public_html (xxxxx is the user), and chrooting /home/vhosts wouldn't solve the problem, because the users would still be allowed to see each other.

    2) Using apache-mod-itk would slow down the websites too much, and I'm not sure it would solve anything.

    3) Without using either of the previous two, I think the only thing left is preventing symlinking, i.e. not allowing the users to link to something that doesn't belong to them.

    So I think I'm going to follow the third option, but... how can I efficiently prevent symlinking while still keeping mod_rewrite working? PHP has already been chrooted with php-fpm, so my only concern is Apache itself.
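
    On the symlink point, one combination often suggested (a sketch assuming Apache 2.2+; the directory path mirrors the layout above) is to replace FollowSymLinks with SymLinksIfOwnerMatch, which mod_rewrite also accepts:

        <Directory /home/vhosts/*/domains/*/public_html>
            # Follow a symlink only when it and its target have the same owner,
            # so users cannot link into other users' trees:
            Options -FollowSymLinks +SymLinksIfOwnerMatch
            # Let .htaccess toggle only this same restricted option:
            AllowOverride FileInfo Options=SymLinksIfOwnerMatch
        </Directory>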

    Read the article
