Search Results

Search found 7020 results on 281 pages for 'shared ptr'.

Page 11/281 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • How to secure Apache for shared hosting environment? (chrooting, avoid symlinking...)

    - by Alessio Periloso
    I'm having problems dealing with Apache configuration: the problem is that I want to limit each user to his own docroot (so a chroot() would be what I'm looking for), but: mod_chroot works only globally and not per virtual host. I have the users in paths like /home/vhosts/xxxxx/domains/domain.tld/public_html (xxxxx is the user), and chrooting /home/vhosts doesn't solve the problem, because the users would still be able to see each other. Using apache-mod-itk would slow the websites down too much, and I'm not sure it would solve anything. Without using either of the previous two, I think the only thing left is to prevent symlinking, i.e. not allowing users to link to something that doesn't belong to them. So I think I'm going to follow the third option, but... how do I efficiently prevent symlinking while still keeping mod_rewrite working? PHP has already been chrooted with php-fpm, so my only concern is Apache itself.
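
    One commonly suggested approach (only a sketch with a hypothetical path; verify that mod_rewrite in .htaccess still behaves as expected on your Apache version) is to disable plain symlink following per vhost and only allow symlinks whose target is owned by the same user:

        <Directory /home/vhosts/xxxxx/domains/domain.tld/public_html>
            # Refuse symlinks whose target isn't owned by the vhost's user
            Options -FollowSymLinks +SymLinksIfOwnerMatch
            # FileInfo is enough for RewriteEngine/RewriteRule in .htaccess;
            # leaving Options out of AllowOverride stops users from re-enabling FollowSymLinks
            AllowOverride FileInfo
        </Directory>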

    Read the article

  • Maintaining shared service in ASP.NET MVC Application

    - by kazimanzurrashid
    Depending on the application, sometimes we have to maintain a shared service throughout the application. Let's say you are developing a multi-blog blog engine where both the controller and the view must know the blog currently being visited, its settings, the user information and the URL generation service. In this post, I will show you how you can handle this kind of case in the most convenient way. First, let's see the most basic way: we can create our PostController as follows: public class PostController : Controller { public PostController(dependencies...) { } public ActionResult Index(string blogName, int? page) { BlogInfo blog = blogService.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublished(blog.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCount(blog.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new IndexViewModel(urlResolver, user, blog, posts, count, page)); } public ActionResult Archive(string blogName, int? page, ArchiveDate archiveDate) { BlogInfo blog = blogService.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindArchived(blog.Id, archiveDate, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetArchivedCount(blog.Id, archiveDate); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new ArchiveViewModel(urlResolver, user, blog, posts, count, page, archiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { BlogInfo blog = blogService.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } TagInfo tag = tagService.FindBySlug(blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(blog.Id, tag.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new TagViewModel(urlResolver, user, blog, posts, count, page, tag)); } } As you can see, the above code heavily depends on the current blog, and the blog retrieval code is duplicated in all of the action methods; once the blog is retrieved, the same blog is passed to the view model. Besides the blog, the view also needs the current user and the URL resolver to render properly. One way to remove the duplicated blog retrieval code is to create a custom model binder that resolves the blog from the blog name and to use the blog as a parameter in the action methods instead of the string blog name, but that only helps with the first half of the scenario: the action methods still have to pass the blog, user, URL resolver etc. to the view model.

    Now let's try to improve the above code. First, let's create a new class that will contain the shared services; let's name it BlogContext: public class BlogContext { public BlogInfo Blog { get; set; } public UserInfo User { get; set; } public IUrlResolver UrlResolver { get; set; } } Next, we will create an interface, IContextAwareService: public interface IContextAwareService { BlogContext Context { get; set; } } The idea is that whoever needs these shared services implements this interface; in our case that is both the controller and the view model. Now we will create an action filter that is responsible for populating the context: public class PopulateBlogContextAttribute : FilterAttribute, IActionFilter { private static string blogNameRouteParameter = "blogName"; private readonly IBlogService blogService; private readonly IUserService userService; private readonly BlogContext context; public PopulateBlogContextAttribute(IBlogService blogService, IUserService userService, IUrlResolver urlResolver) { Invariant.IsNotNull(blogService, "blogService"); Invariant.IsNotNull(userService, "userService"); Invariant.IsNotNull(urlResolver, "urlResolver"); this.blogService = blogService; this.userService = userService; context = new BlogContext { UrlResolver = urlResolver }; } public static string BlogNameRouteParameter { [DebuggerStepThrough] get { return blogNameRouteParameter; } [DebuggerStepThrough] set { blogNameRouteParameter = value; } } public void OnActionExecuting(ActionExecutingContext filterContext) { string blogName = (string) filterContext.Controller.ValueProvider.GetValue(BlogNameRouteParameter).ConvertTo(typeof(string), Culture.Current); if (!string.IsNullOrWhiteSpace(blogName)) { context.Blog = blogService.FindByName(blogName); } if (context.Blog == null) { filterContext.Result = new NotFoundResult(); return; } if (filterContext.HttpContext.User.Identity.IsAuthenticated) { context.User = userService.FindByName(filterContext.HttpContext.User.Identity.Name); } IContextAwareService controller = filterContext.Controller as IContextAwareService; if (controller != null) { controller.Context = context; } } public void OnActionExecuted(ActionExecutedContext filterContext) { Invariant.IsNotNull(filterContext, "filterContext"); if ((filterContext.Exception == null) || filterContext.ExceptionHandled) { IContextAwareService model = filterContext.Controller.ViewData.Model as IContextAwareService; if (model != null) { model.Context = context; } } } } As you can see, we populate the context in OnActionExecuting, which runs just before the controller's action method executes, so by the time our action method runs the context is already populated. Next, we assign the same context to the view model in the OnActionExecuted method, which runs just after we set the model and return the view in our action methods. Now, let's change the view models so that they implement this interface: public class IndexViewModel : IContextAwareService { // More code } public class ArchiveViewModel : IContextAwareService { // More code } public class TagViewModel : IContextAwareService { // More code } and the controller: public class PostController : Controller, IContextAwareService { public PostController(dependencies...) { } public BlogContext Context { get; set; } public ActionResult Index(int? page) { IEnumerable<PostInfo> posts = postService.FindPublished(Context.Blog.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCount(Context.Blog.Id); return View(new IndexViewModel(posts, count, page)); } public ActionResult Archive(int? page, ArchiveDate archiveDate) { IEnumerable<PostInfo> posts = postService.FindArchived(Context.Blog.Id, archiveDate, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetArchivedCount(Context.Blog.Id, archiveDate); return View(new ArchiveViewModel(posts, count, page, archiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { TagInfo tag = tagService.FindBySlug(Context.Blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(Context.Blog.Id, tag.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); return View(new TagViewModel(posts, count, page, tag)); } }

    Now, the last thing is to glue everything together. I will be using AspNetMvcExtensibility to register the action filter (as there is no better way to inject dependencies into action filters). public class RegisterFilters : RegisterFiltersBase { private static readonly Type controllerType = typeof(Controller); private static readonly Type contextAwareType = typeof(IContextAwareService); protected override void Register(IFilterRegistry registry) { TypeCatalog controllers = new TypeCatalogBuilder() .Add(GetType().Assembly) .Include(type => controllerType.IsAssignableFrom(type) && contextAwareType.IsAssignableFrom(type)); registry.Register<PopulateBlogContextAttribute>(controllers); } } Thoughts and comments?

    Read the article

  • Shared Library Issues In Linux

    Innovations: "Shared libraries are one of the many strong design features of Linux, but can lead to headaches for inexperienced users, and even experienced users in certain situations."

    Read the article

  • 12 Steps to NTFS Shared Folders in Windows Server 2012

    - by KeithMayer
    In the past, managing and sharing NTFS folders could be a real ordeal – there were different tools for managing NTFS permissions vs shared folders and most IT Pros generally used these tools on a server-by-server basis from each server’s console. Server Manager to the rescue! In Windows Server 2012, Server Manager provides a management facelift on top of the disconnected process that we’ve used in the past for sharing folders and setting NTFS permissions. In addition, Server Manager can

    Read the article

  • Problem in shared folder

    - by alsadi90
    I followed the steps for sharing folders between Windows 7 and Ubuntu in VirtualBox, but the folder appears with an X sign and gives me the following message when I open it: "the folder content could not be displayed". When I choose "Shared Folders" from the "Devices" menu, the following is written below: "on the system page, you have assigned more than 50% of your computer's memory (2.93) to the virtual machine ...
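
    If the share itself is set up correctly, this symptom is often just a guest-side permission issue. A common fix (assuming Ubuntu is the guest and the VirtualBox Guest Additions are installed in it) is to add your user to the vboxsf group and log in again:

        # run inside the Ubuntu guest, then log out and back in
        sudo adduser $USER vboxsf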

    Read the article

  • Is it possible to share a C struct in shared memory between apps compiled with different compilers?

    - by Joseph Garvin
    I realize that in general the C and C++ standards give compiler writers a lot of latitude. But in particular they guarantee that POD types like C struct members have to be laid out in memory in the same order that they're listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using the header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors? I am assuming though that the size of the types contained is consistent across two compilers on the same architecture (it has to be the same platform already since we're talking about shared memory). I realize that this is not always true for some types (e.g. long vs. long long in GCC and MSVC 64-bit) but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.
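
    A minimal sketch of the idea (hypothetical field names; assumes C++11 for static_assert and that both compilers honour #pragma pack): pin the layout explicitly and let each build verify the offsets at compile time, so any disagreement shows up as a build error rather than as corrupted shared memory:

        #include <cstddef>   // offsetof
        #include <cstdint>   // fixed-width integer types

        #pragma pack(push, 1)          // no padding, so both compilers agree
        struct SharedRecord {
            std::uint32_t id;
            std::uint16_t flags;
            double        value;       // IEEE 754 on both toolchains
        };
        #pragma pack(pop)

        // Each application compiles these checks with its own compiler.
        static_assert(sizeof(SharedRecord) == 14, "layout mismatch");
        static_assert(offsetof(SharedRecord, flags) == 4, "layout mismatch");
        static_assert(offsetof(SharedRecord, value) == 6, "layout mismatch");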

    Read the article

  • How do I synchronize access to shared memory in LynxOS/POSIX?

    - by GrahamS
    I am implementing two processes on a LynxOS SE (POSIX conformant) system that will communicate via shared memory. One process will act as a "producer" and the other a "consumer". In a multi-threaded system my approach to this would be to use a mutex and condvar (condition variable) pair, with the consumer waiting on the condvar (with pthread_cond_wait) and the producer signalling it (with pthread_cond_signal) when the shared memory is updated. How do I achieve this in a multi-process, rather than multi-threaded, architecture? Is there a LynxOS/POSIX way to create a condvar/mutex pair that can be used between processes? Or is some other synchronization mechanism more appropriate in this scenario?
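
    POSIX does allow exactly this: both the mutex and the condition variable can be marked PROCESS_SHARED and placed inside the shared memory segment itself. A minimal producer-side sketch (error handling omitted; assumes the platform supports the _POSIX_THREAD_PROCESS_SHARED option, and the segment name is made up):

        #include <fcntl.h>
        #include <pthread.h>
        #include <sys/mman.h>
        #include <unistd.h>

        struct SharedSync {
            pthread_mutex_t mutex;
            pthread_cond_t  cond;
            int             data_ready;
        };

        int main() {
            int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
            ftruncate(fd, sizeof(SharedSync));
            SharedSync *s = static_cast<SharedSync *>(
                mmap(NULL, sizeof(SharedSync),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

            // Initialise once (producer side) with PROCESS_SHARED attributes.
            pthread_mutexattr_t ma;
            pthread_condattr_t  ca;
            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(&s->mutex, &ma);
            pthread_cond_init(&s->cond, &ca);

            // Publish data exactly as in the single-process case.
            pthread_mutex_lock(&s->mutex);
            s->data_ready = 1;
            pthread_cond_signal(&s->cond);
            pthread_mutex_unlock(&s->mutex);
            return 0;
        }

    The consumer mmaps the same segment and waits with pthread_cond_wait(&s->cond, &s->mutex) in the usual predicate loop; if PROCESS_SHARED condvars turn out not to be supported on LynxOS, named POSIX semaphores (sem_open) are a simpler fallback.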

    Read the article

  • How to log slow queries in shared hosting MySQL?

    - by tomaszs
    I have a shared hosting account where I have my website and MySQL database. I've installed an open-source script for statistics (phpMyVisites) and it has started to work very slowly lately. It's written using some kind of framework and has many PHP files. I know that to find slow queries I can use the slow query log functionality in MySQL. But on this shared hosting I cannot use this method because I cannot change my.cnf. I don't want to switch my statistics script to another one, and I don't want to mess around with all the files of this script to find out where to put diagnostic code to log queries manually. I would like to do it without changes in the PHP code. So my question is: how do I log slow queries under these conditions: can't change my.cnf to enable the slow query log; can't switch the statistics script to another one; don't know how the script is written or where the MySQL queries are issued; can't ask my provider for a slow query log. Is there any method to do this in a simple, easy, fast way?
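
    One low-tech option that fits those constraints (placeholder credentials; assumes shell access and that your MySQL user is allowed to see its own connections) is to snapshot what the server is executing every couple of seconds; statements that keep showing up across snapshots are the slow ones:

        # append the currently running statements to a log every 2 seconds
        while true; do
            date >> processlist.log
            mysql -u USER -pPASSWORD -h HOST -e "SHOW FULL PROCESSLIST" >> processlist.log
            sleep 2
        done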

    Read the article

  • Is there a .def file equivalent on Linux for controlling exported function names in a shared library

    - by morpheous
    I am building a shared library on Ubuntu 9.10. I want to export only a subset of my functions from the library. On the Windows platform, this would be done using a module definition (.def) file which would contain a list of the external and internal names of the functions exported from the library. I have the following questions: How can I restrict the exported functions of a shared library to those I want (i.e. a .def file equivalent)? Using .def files as an example, you can give a function an external name that is different from its internal name (useful for preventing name collisions and also for redecorating mangled names, etc.); can the same be done on Linux? On Windows I can use the EXPORT command (IIRC) to check the list of exported functions and addresses; what is the equivalent way to do this on Linux?
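
    The closest equivalent on Linux is a linker version script (GNU ld's --version-script); a minimal sketch with hypothetical symbol names, plus the usual ways to inspect what a library actually exports:

        # exports.map -- only the listed symbols stay visible; everything else is hidden
        {
          global:
            foo_init;
            foo_process;
          local:
            *;
        };

        gcc -shared -fPIC -Wl,--version-script=exports.map -o libfoo.so foo.c
        nm -D --defined-only libfoo.so     # list the exported dynamic symbols
        objdump -T libfoo.so               # alternative view, with addresses

    At the source level, -fvisibility=hidden combined with __attribute__((visibility("default"))) on the chosen functions achieves the same restriction; giving a symbol a different external name has no direct version-script equivalent, though __attribute__((alias)) or thin wrapper functions can approximate it.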

    Read the article

  • What is the Effect of Declaring 'extern "C"' in the Header to a C++ Shared Library?

    - by Adam
    Based on this question I understand the purpose of the construct in linking C libraries with C++ code. Now suppose the following: I have a '.so' shared library compiled with a C++ compiler. The header has a 'typedef struct' and a number of function declarations. If the header includes the extern "C" declaration... #ifdef __cplusplus extern "C" { #endif // typedef struct ...; // function decls #ifdef __cplusplus } #endif ... what is the effect? Specifically I'm wondering if there are any detrimental side effects of that declaration since the shared library is compiled as C++, not C. Is there any reason to have the extern "C" declaration in this case?
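
    For reference, the effect is limited to linkage: functions declared inside the guard get C linkage, so their exported names are not mangled (and those particular functions cannot be overloaded), while their bodies may still be ordinary C++; the typedef'd struct itself is unaffected. A sketch of what such a header is assumed to look like:

        /* widget.h -- hypothetical header shipped with the .so */
        #ifdef __cplusplus
        extern "C" {
        #endif

        typedef struct Widget Widget;

        Widget *widget_create(int id);      /* exported as "widget_create",        */
        void    widget_destroy(Widget *w);  /* not as a mangled C++ symbol          */

        #ifdef __cplusplus
        }
        #endif

    Running nm -D on the library shows the unmangled names, which is the main benefit: a stable, compiler-independent name that C programs and dlsym() callers can resolve. The main costs are the loss of overloading and of C++-only types in those signatures.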

    Read the article

  • How to correctly configure server for Symfony (on shared hosting)?

    - by Eugene
    Hi! I've decided to learn Symfony and right now I am reading through the very start of the "Practical Symfony" book. After reading the "Web Server Configuration" part I have a question. The manual describes how to configure the server correctly: the browser should have access only to the web/ and sf/.../ directories. The manual has great instructions regarding this, and being a Linux user I had no problem following them and making everything work on my local machine. However, that involves editing VirtualHost entries, which normally is not easy to do on common shared hosting servers. So I wonder what is the common technique that Symfony developers use to get the same results in a shared hosting environment? I think I can do that by adding "deny from all" in the root and then overriding that rule in the allowed directories. However I am not sure if that's the easiest way and the way that is normally used.
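
    The .htaccess approach described in the question would look roughly like this (only a sketch, using Apache 2.2-style directives and assuming the host permits the relevant AllowOverride):

        # .htaccess in the project root -- block everything by default
        Order deny,allow
        Deny from all

        # web/.htaccess (and similarly sf/.htaccess) -- re-open the public directories
        Order allow,deny
        Allow from all

    On many shared hosts the simpler and more common technique is to make web/ itself the document root (or upload its contents into public_html and adjust the paths in index.php), so the private directories never sit under the web root at all.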

    Read the article

  • Shared secret length limit on OSX VPN client

    - by Samuel
    I'm trying to set up the built-in VPN client in OS X. The settings I'm using (IPsec GW, shared secret, etc...) work flawlessly with other clients (IPsecuritas, vpnc, etc...) but aren't working with the built-in client. The error I get is: Wrong shared secret (not the exact message, since OS X is localized). The shared secret is 128 characters long, so I'm wondering if it's hitting a length limit. I would like to know if that's true, and if so, how I could overcome it?

    Read the article

  • How to create a shared lock blocking an intent exclusive lock

    - by FremenFreedom
    As I understand it, a SELECT statement will place a shared lock on the rows that it will return. While that SELECT is running, if an UPDATE statement comes along and needs to grab an intent exclusive lock then that UPDATE statement will need to wait until the SELECT statement releases its shared locks. I am trying to test this SELECT shared lock thing by doing a BEGIN TRAN and then running a SELECT, not COMMITing, and then running an UPDATE in another session on the exact same row. The UPDATE worked fine -- no lock, no wait. So this must not be a valid way to simulate a shared lock blocking an intent exclusive lock? Can you give me a scenario where I can create a lock with a SELECT that would force an UPDATE to wait? I'm working with SQL Server 2000 and 2005 across a linked server: the table is on the 2005 instance, the select is happening on 2000, and the update is executed from 2005. All in SSMS 2005.
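
    About what the test is trying to reproduce: under the default READ COMMITTED isolation level, the SELECT's shared locks are released as soon as the statement finishes, even inside an open transaction, which is why the UPDATE went straight through. Holding them until COMMIT requires a stricter isolation level or a table hint; a sketch with hypothetical table and column names:

        -- Session 1: keep the shared (S) locks until the transaction ends
        BEGIN TRAN;
        SELECT *
        FROM dbo.MyTable WITH (REPEATABLEREAD)   -- or WITH (HOLDLOCK)
        WHERE Id = 42;
        -- no COMMIT yet

        -- Session 2: now has to wait for Session 1 to COMMIT or ROLLBACK
        UPDATE dbo.MyTable
        SET SomeCol = 'x'
        WHERE Id = 42;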

    Read the article

  • how to map sub domain to amazon ec2 and main domain mapped to shared hosting

    - by user330415
    I am trying to map a subdomain to an Amazon EC2 instance with an Elastic IP. I already mapped www.xxxexample.com to my shared hosting by giving the DNS server name (ns1.justhost.com), and I created many subdomains using the cPanel of the shared hosting. The shared hosting is working fine. Amazon Route 53 is a paid service, so I don't want to use it. I want my subdomain to point to the Amazon EC2 instance while the main domain stays mapped to the shared hosting. I tried the example "getting my domain name to point to my amazon ec2 instance", but nothing worked for me. Can anybody help me get rid of this issue? Thanks in advance.
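
    Since the domain's authoritative nameservers stay with the shared host, the usual approach is simply to add an A record for the subdomain in that host's DNS zone (cPanel's Zone Editor / Advanced DNS Zone Editor) pointing at the Elastic IP. A sketch with a hypothetical subdomain and a documentation IP:

        ; in the zone for xxxexample.com at the shared host's nameservers
        app.xxxexample.com.    14400    IN    A    203.0.113.10   ; the EC2 Elastic IP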

    Read the article

  • Use shared 404 page for virtual hosts in Nginx

    - by Choy
    I'd like to have a shared 404 page to use across my virtual hosts. The following is my setup. Two sites, each with their own config file in /sites-available/ and /sites-enabled/ www.foo.com bar.foo.com The www directory is set up as: www/ foo.com/ foo.com/index.html bar.foo.com/ bar.foo.com/index.html shared/ shared/404.html Both config files in /sites-available are the same except for the root and server name: root /var/www/bar.foo.com; index index.html index.htm index.php; server_name bar.foo.com; location / { try_files $uri $uri/ /index.php; } error_page 404 /404.html; location = /404.html { root /var/www/shared; } I've tried the above code and also tried setting error_page 404 /var/www/shared/404.html (without the following location block). I've also double checked to make sure my permissions are set to 775 for all folders and files in www. When I try to access a non-existent page, Nginx serves the respective index.php of the virtual host I'm trying to access. Can anyone point out what I'm doing wrong? Thanks!
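
    One thing worth noting: with try_files $uri $uri/ /index.php; every missing path is handed to /index.php, so nginx itself never raises a 404 and the error_page directive never fires. If the shared page only needs to cover requests the PHP front controller shouldn't handle, a sketch like this (same vhost layout as in the question) works; otherwise the 404 has to come from the PHP application itself:

        root /var/www/bar.foo.com;
        server_name bar.foo.com;

        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
            internal;              # don't serve it for direct requests to /404.html
        }

        location / {
            # fall back to a real 404 instead of swallowing misses into index.php
            try_files $uri $uri/ =404;
        }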

    Read the article

  • Mount shared folder (vbox) as another user

    - by jlcd
    I'm trying to mount my vbox shared folder every time my Ubuntu guest starts. So, I added an entry in /etc/init with this: description "mount vboxsf Desktop" start on startup task exec mount -t vboxsf Desktop /var/www/shared It seems to work, except for the fact that all the files are owned by "root", and I don't have permission to write to the folder (neither chmod nor chown seems to be working). So, how can I make all the files under this shared folder be owned by the www-data user/group? Thanks. P.S.: The main reason for me to have an automatic shared folder is so I can create/edit files from the HOST in the GUEST www folder. If you have a better idea for that, instead of sharing the folder, feel free to say.
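
    The vboxsf mount itself can set the owner via mount options; a sketch of the same mount line with uid/gid/mode options (the numeric IDs are looked up rather than hard-coded, since www-data's uid can vary between systems):

        # mount the share so www-data owns everything in it
        mount -t vboxsf -o uid=$(id -u www-data),gid=$(id -g www-data),dmode=775,fmode=664 \
            Desktop /var/www/shared

    The same options also work from /etc/fstab, or alongside VirtualBox's own automount combined with membership in the vboxsf group.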

    Read the article

  • How to automount SMB shared network drives in Mac OS X Lion

    - by cyppher
    In Mac OS X 10.7 (Lion) Apple has replaced good old SMB support. Now I can't auto connect to my shared (SMB) network drives. Workarounds? Or Impossible? In OS X Snow Leopard, I could automatically connect my Ubuntu (SMB) shared network drives with auto_smb / auto_master (autofs configuration in /private/etc/). I made three mount points (folders) directly in '/Volumes', I used /Volumes/Data and /Volumes/webroot (both SMB shared). Unfortunately Lion doesn't connect (automount) my network drives. I have to manually connect to the server (Ubuntu file server) in Finder, then open up Terminal to navigate to the mount points, and then it connects. This is not a workable solution. I've searched (Google/SO) but found no solutions apart from an unsupported hack. Isn't it possible any more to automatically connect to an SMB-shared drive during startup?
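
    For reference, the Snow Leopard-style autofs setup being described looks roughly like this (hypothetical mount directory, server and share names; the password sits in the map file, so keep it readable by root only), and sudo automount -vc reloads the maps after editing:

        # /etc/auto_master -- add one line referring to a custom map
        /Volumes/auto    auto_smb

        # /etc/auto_smb -- one line per share
        Data       -fstype=smbfs    ://user:password@fileserver/Data
        webroot    -fstype=smbfs    ://user:password@fileserver/webroot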

    Read the article

  • Sharing my home folders with other users on the same PC

    - by Stephen Myall
    After reviewing similar questions on the same subject I'm still none the wiser. I want to share my music, pictures and video folders with other users on my PC. I am using 11.10 and will be upgrading to 12.04. The method I have tried is to right-click on the folder (as Administrator), select "Sharing Options", check all the necessary fields and give the share a name like "music-shared". Another dialog then pops up and I select "Set Nautilus Permissions". When the other user logs on, they go to their Home folder, click on the network and can see the "music-shared" folder, but they get a message that they do not have the necessary permissions to view the content. I'm sure I'm missing something simple. My Home folder is encrypted and I am willing to unencrypt it to make this work. Unlike other questions on this site, I don't have a separate partition etc. I would be grateful for any help.
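
    The usual culprit is that the other account can't traverse your home directory at all (and an encrypted home makes this worse, which is why unencrypting is often necessary). Assuming the home ends up unencrypted, per-user ACLs are one way to grant read access without going through network sharing; user and path names below are hypothetical:

        # let 'otheruser' enter the home directory (traverse only, no listing)
        sudo setfacl -m u:otheruser:x /home/stephen

        # read access to a media folder, both for existing and future files
        sudo setfacl -R    -m u:otheruser:rX /home/stephen/Music
        sudo setfacl -R -d -m u:otheruser:rX /home/stephen/Music

    A simpler alternative many people use is to move the media into a directory outside /home (for example /srv/media) and put both users in a common group.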

    Read the article

  • Debian - error while loading shared libraries

    - by Jirí Valoušek
    I have a problem with the DocToText script from Silvercoders.com on my 64-bit Debian Squeeze. It works properly on another 32-bit machine, but on this one I still have a problem with a missing .so module. # file /bin/bash /bin/bash: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped If I run doctotext.sh, it returns an error: ./doctotext: error while loading shared libraries: libgsf-1.so.114: cannot open shared object file: No such file or directory Please, can you help?
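
    A sketch of the usual diagnosis (package names are from memory for Squeeze, so verify them with apt-file/apt-cache): find out which libraries the binary can't resolve, then install whatever ships libgsf-1.so.114; if doctotext itself turns out to be a 32-bit binary on this 64-bit box, the 32-bit variant of the library is what's needed instead:

        file ./doctotext                    # 32-bit or 64-bit binary?
        ldd ./doctotext | grep "not found"  # every unresolved shared object

        apt-file search libgsf-1.so.114     # which package provides it
        sudo apt-get install libgsf-1-114   # the usual provider on Squeeze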

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application, which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database is running on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment, and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment, and NFS in the production environment. We've managed to set up the environment to the extent where the shared files folder is mounted properly after a reboot. But... the problem is that, in this setup, the deployment of new versions on running instances fails because $ rm -rf can't remove the mounted directory, and as a result the entire environment goes down and we need to restart the environment, which isn't really an elegant solution. Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem? By looking at the Elastic Beanstalk Hostmanager code (Ruby) there seems to be a way to hook our functionality (unmount if mounted in pre-deploy and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent. Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. by not changing the core Hostmanager files)?

    Read the article

  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power cycled and with that I power cycled my router. After that I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue as I've power cycled my networking equipment in the past without issues and none of my settings appear to have been lost. I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive", from there I enter \\SERVER\Storage as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive: Windows cannot access \\Server\Storage Check the spelling of the name. Otherwise there might be a problem with your network. To try to identify and resolve network problems, click Diagnose. Details: Error code: 0x80070035 The network path was not found. When I click Diagnose I get the following: Problems found file and print sharing resource (SERVER) is online but isn't responding to connection attempts. The remote computer isn't responding to connection on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn't find any problems with the firewall on your computer. I've tried this from multiple computers with the same issue too. To resolve the problems so far I've tried: Disabling the firewall on SERVER Reinstalling File Services Modifying NetBT\Parameters registry values Adding a custom inbound rule for port 445 Adding port forwarding on my router for port 445 Recreating the shared folders Checking and rechecking the shared folder permissions. Resetting my user account password on the server used to access the shared folder. I'm pulling my hair out with this problem mainly because it came out of nowhere. It was working fine the night before and the next day it just stopped working. Any ideas of what I could try next are much appreciated. It should also be noted that this server is used as a web server too and that functionality still works correctly.
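
    A few quick checks that narrow down whether port 445 is really blocked or the Server service simply isn't answering (the telnet client is an optional Windows feature, so it may need enabling first):

        :: from the Windows 7 client: is anything answering on 445 at all?
        telnet SERVER 445

        :: does the server still advertise its shares?
        net view \\SERVER

        :: run on SERVER itself: is the file sharing service running?
        sc query lanmanserver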

    Read the article

  • Install unetbootin on Ubuntu 12.04

    - by Matteo
    I'm trying to install UNetbootin on Ubuntu 12.04 LTS. I downloaded the executable file from this link and followed the instructions below: If using Linux, make the file executable (using either the command chmod +x ./unetbootin-linux, or going to Properties-Permissions and checking "Execute"), then start the application, you will be prompted for your password to grant the application administrative rights, then the main dialog will appear, where you select a distribution and install target (USB Drive or Hard Disk), then reboot when prompted. So I typed sudo chmod +x unetbootin-linux-584 in my terminal and tried to execute the binary with ./unetbootin-linux-584, but got this output: ./unetbootin-linux-584: error while loading shared libraries: libXrandr.so.2: cannot open shared object file: No such file or directory However, when I checked for the libXrandr libraries on my system, I actually found them: $> locate libXrandr /usr/lib/x86_64-linux-gnu/libXrandr.so.2 /usr/lib/x86_64-linux-gnu/libXrandr.so.2.2.0 /usr/lib/x86_64-linux-gnu/libXrandr_ltsq.so.2 /usr/lib/x86_64-linux-gnu/libXrandr_ltsq.so.2.2.0 so I really don't have a clue what the problem is or how to fix it. Any ideas?
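
    The clue is in the paths: only the x86_64 copies of libXrandr are installed, and the downloaded unetbootin binary is most likely 32-bit, so it cannot use them. A sketch of the usual fixes on 12.04 (package names assumed from that release; more :i386 libraries may be needed after this one):

        file ./unetbootin-linux-584           # confirm it's a 32-bit ELF binary

        # pull in the 32-bit X library via multiarch
        sudo apt-get install libxrandr2:i386

        # ...or skip the standalone binary entirely and use the packaged version,
        # which resolves its dependencies itself
        sudo apt-get install unetbootin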

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >