Search Results

Search found 35177 results on 1408 pages for 'shared object'.


  • Shared SQL Server 2008

    - by nazaf
    Hi, I have a Windows Hyper-V VPS plan with 1024 MB of RAM. After installing SQL Server 2008 Express, my memory usage went up to 75% without even running my site yet. I know that SQL Server consumes a lot of memory, so I am considering hosting my DB on a shared server instead. Which is more scalable: installing my DB on my VPS, or hosting it on a shared server? If the latter, can you recommend a good shared server? Thanks.
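    If it helps with the comparison, one rough way to see whether the VPS alone can cope is to cap how much memory the Express instance may take and watch how the site behaves; a minimal sketch (the 256 MB cap is only an illustrative value):
      EXEC sys.sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sys.sp_configure 'max server memory (MB)', 256;  -- illustrative cap, tune for your workload
      RECONFIGURE;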

    Read the article

  • Unable to access Windows 7 shared folder with Windows 98

    - by PabloG
    I'm unable to access a Windows 7 (Windows 7 Pro 64-bit) shared folder from an old Windows 98 box. I have tried turning on file and printer sharing, turning on public folder sharing, turning off password-protected sharing, sharing the folder with read permissions for Everyone, and lowering the encryption to 40-56 bits. The shared folder works fine from Windows XP, and even from Linux with CIFS / Samba, but when I try to use it from Win98 with: NET USE X: \\SERVER\SHARE a user/password dialog pops up. I entered the administrator's username and password from my Windows 7 box, but it doesn't work (incorrect password). The same Win98 machine works fine accessing a Windows XP shared folder, so it looks like a Windows 7 networking issue. Any ideas?

    Read the article

  • Unable to access shared drives in Win7

    - by colin
    OK, this is doing my head in. Not your usual Win7 sharing problem. I just built a Win7 computer to go onto my home LAN and be accessed by everyone on the network, mostly running XP SP3. Installed it as a single HD+DVD system, got it happy, then added my storage drives, set them up in the right order and letters, rebooted, and shared them. Couldn't access the computer at all from any XP machine. Set the password-protected sharing option in Win7 to NO, and now I can "see" all four shared drives from any machine, but can access only two of them. Please tell me what's going on, as I've done nothing different from one drive to the other: just installed and set up drive letters as normal, then shared the damn things. The odd thing about it? The two 1.5T drives that have a lot of data on board are accessible; the two nearly empty 500G ones are not. ANY ideas? Cheers, Colin

    Read the article

  • Running remotely an app from a shared folder with PsExec

    - by Stephane
    I am actually not sure that this is possible. Let's see: I have a script that runs on a build server. Let's name this server A. It drops the bins to a shared folder on server B. And I want to run the program on server C. Using caspol I can allow the executable to be run remotely; that means from B I can run \\C\shared\my.exe. What I want to do is, from A, run \\C\shared\my.exe on B.
      SysInternals\PsExec.exe -u username -p password -accepteula \\ServerC -i 0 -d -w \\ServerB\Nightly\Server \\ServerB\Nightly\Server\server.exe
    The user has all the necessary rights. But the -w (working directory) option apparently wants a path relative to the server I point to. Any idea?
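    One workaround sketch, untested and assuming the account PsExec logs in with can reach the ServerB share from ServerC: have PsExec start cmd on ServerC and let pushd map the UNC path to a temporary drive letter there, so the working directory ends up local to the machine PsExec points at:
      SysInternals\PsExec.exe -u username -p password -accepteula \\ServerC -i 0 -d cmd /c "pushd \\ServerB\Nightly\Server && server.exe"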

    Read the article

  • Windows 7 access to printer shared with XP?

    - by chmike
    I have, at home, an eeebox running XP 24/7 with an attached printer (Canon IP5300) installed as shared. We have a few other laptops and PCs, all with Vista, that can access and print on the shared printer without problem. We just received a new Dell computer with Windows 7 64-bit on it, but it fails to connect to the shared printer. I tried connecting the printer with its USB cord directly to the Windows 7 PC and the required driver was automatically downloaded and installed. I could then access the printer-specific properties, etc. But if I connect it back to the XP computer, the Windows 7 PC still refuses to connect to the remote printer although it now has the drivers. The Windows 7 license is a Family Pack. By the way, I also have an old Canon scanner still working perfectly with XP, but for which I can't find compatible drivers for Windows 7. Do I have to buy a new one? Any help would be welcome.

    Read the article

  • Setting up secure shared hosting (Apache, PHP, MySQL)

    - by Apaz
    So I'm setting up shared hosting with Apache, PHP and MySQL, and the biggest question mark is how to handle PHP, since there are a million options out there for configuring it securely. The plan is: chroot for MySQL (built-in support for chroot), chroot for Apache (mod_security), each user executing their PHP scripts as their own user (see below), set open_basedir, and disable all "evil" PHP functions (allow_url_fopen, system, exec, and so on). I've looked at suexec and suPHP but they seem very slow: http://blog.stuartherbert.com/php/2007/12/18/using-suexec-to-secure-a-shared-server/ http://blog.stuartherbert.com/php/2008/01/18/using-suphp-to-secure-a-shared-server/ So I've looked some more and found some other solutions: apache2-mpm-itk + mod_php(?), mod_fcgid + php-fpm, or mod_fastcgi + php-fpm. I've tried a simple setup with mod_fastcgi + php-fpm and it seems to work, runs as the correct user and so on, but the protection against directory traversal is still open_basedir(?). One solution for that could be to use php-fpm's chroot option, but that causes a lot of other issues: domain name resolution does not work, and sending mail does not work. Tips?
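    For the mod_fastcgi + php-fpm route, a minimal per-user pool sketch (user names, paths and socket locations are illustrative; open_basedir still does the directory-containment work):
      ; /etc/php-fpm.d/alice.conf
      [alice]
      user = alice
      group = alice
      listen = /var/run/php-fpm-alice.sock
      php_admin_value[open_basedir] = /home/alice/www:/tmp
      php_admin_value[disable_functions] = exec,system,passthru,shell_exec,popen,proc_open
      php_admin_flag[allow_url_fopen] = off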

    Read the article

  • How can I set Out-Of-Office in a shared mailbox

    - by balexandre
    I want to set an out-of-office automatic response for all emails that arrive at our [email protected]. Currently in Outlook I only have one mailbox (the user mailbox), but it has 2 shared mailboxes set up. I have tried to create a rule that says: for all email received on account [email protected], forward to user [email protected], and have that user set up the out-of-office message, but it simply did not work, and I suspect that the rules only apply to the user account and not the shared account... How can I set Out-Of-Office in this shared mailbox?
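    If the shared mailbox lives on an Exchange server (2010 or later), one sketch is to set the auto-reply on the mailbox itself from the Exchange Management Shell instead of going through an Outlook rule (the mailbox identity and message text here are illustrative):
      Set-MailboxAutoReplyConfiguration -Identity "info" -AutoReplyState Enabled -InternalMessage "We are currently away..." -ExternalMessage "We are currently away..."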

    Read the article

  • Apache2 server does not start: cannot open shared object file

    - by sid__
    I am working with Apache and Passenger for a Rails project, and during a restart I got the following error: Cannot load /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so into server: /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so: cannot open shared object file: No such file or directory However, there is no change in the Apache configuration file. Here is the relevant snippet from the conf file:
      LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
      PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11
      PassengerRuby /usr/bin/ruby1.8
    I am also unable to locate the shared object file in the location the server points to, though I am not really sure how the .so file works (how it is created or destroyed). I would also appreciate it if someone could explain to me what exactly has happened. I understand the shared object file is missing; what could be the reason it got deleted?
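    A couple of checks worth running, assuming the passenger gem itself is still installed: confirm whether the gem directory and the compiled module are actually present, and if only the native module is gone, rebuild it with the standard Passenger installer command:
      gem list passenger
      ls /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/
      passenger-install-apache2-module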

    Read the article

  • Wrong owner and group for files created under a samba shared directory

    - by agmao
    I am trying to make writing to a shared samba directory work. I got a very weird problem. Now the shared directory is writable from a client machine. But the files created under the samba share directory have weird owner and group names. I am writing to the shared directory as user mike under the client machine, but the file created always has user and group name as steve instead... Does anybody know why that would happen...? Another thing I just noticed is that on the samba server, the files have owner and user name as samba, which I created for samba clients. Thanks a lot
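    One thing worth checking in smb.conf is whether the share carries a "force user" or "force group" directive, since that makes every file created through the share owned by that account no matter who connects; a sketch of the kind of share definition that would produce this behaviour (names and path are illustrative):
      [shared]
          path = /srv/samba/shared
          writable = yes
          force user = steve    ; if present, new files are owned by steve regardless of the connecting user
          force group = steve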

    Read the article

  • VPN pre-shared key problems

    - by Owl
    I have two VPNs set up on a Symantec Gateway Security 320. VPN 1 goes to a Symantec Firewall/VPN 100 at another clinic of ours; every hour they lose connectivity, and the error log on the Firewall/VPN 100 shows an invalid pre-shared key error, although both devices show the same pre-shared key entered. VPN 2 goes to our software vendor to use an additional part of our program. I am unable to ping the remote address, and neither can the other company, but my VPN status shows it is connected. They have told me the pre-shared key seemed to be automatically trying to resubmit itself as if it were incorrect, about every hour, even though it is correct. They also told me port 80 traffic was closed, but I show the HTTP service using 80 redirected to 80 in my firewall settings. Please help.

    Read the article

  • VirtualBox Shared Folder encoding issue

    - by Somebody
    I'm using Ubuntu in VirtualBox and have a shared folder mounted in VirtualBox which I'm accessing inside Ubuntu. The problem is that when I edit and save some files from the shared folder in Windows, some strange symbols end up at the end of the edited file. There must be some encoding issue. Doesn't VirtualBox automatically convert files to Unix standards? To fix it, I have to re-mount the shared folder inside Ubuntu each time I edit some file. Any solution to avoid re-mounting each time I edit? I'm mounting like this: mount -t vboxsf SVN /opt/htdocs/ Thanks.

    Read the article

  • Windows 7: Shared folder over wifi working ONLY for "Guests"

    - by James_Smith
    Hi, I have a desktop and a laptop connected to the same wifi router. The desktop is connected with a wire and the laptop over wifi. Both systems run Windows 7 and are in the same workgroup. I have shared some folders on the desktop and can view the shared folder list on the laptop under "network places". But when I try to open a folder, a prompt appears for username and password. When I enter BOTH the username and password, it does not authenticate. When I enter ONLY the username, I get: "Windows cannot access \\WIn71\Setups you do not have permission to access ..." Now to get around this message I have to give access to the "Guests" group on my desktop shared folder. Can't seem to figure out why it can't authenticate the username with a password. And giving access to "Guests" does not sound safe!
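    One detail that sometimes matters when the prompt appears is qualifying the account with the machine name, so the laptop does not look the user up against the wrong computer; a sketch from the laptop's command line (the account name is illustrative, and the * prompts for the password):
      net use \\WIn71\Setups /user:WIn71\james *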

    Read the article

  • Does a program need additional symbols from a .so shared library besides those declared in the header file?

    - by solotim
    In C programming, I thought that an object file can be successfully linked with a .so file as long as the .so file offers all the symbols that have been declared in the header file. Suppose I have foo.c, bar.h and two libraries, libbar.so.1 and libbar.so.2. The implementations of libbar.so.1 and libbar.so.2 are totally different, but I think that's OK as long as they both offer the functions declared in bar.h. I linked foo.o with libbar.so.1 and produced an executable, foo.bin. This executable worked when libbar.so.1 was in LD_LIBRARY_PATH (of course a symbolic link was made as libbar.so). However, when I change the symbolic link to libbar.so.2, foo.bin cannot run and complains: undefined symbol: _ZSt4cerr I found this symbol's type is 'B', and it does not appear in libbar.so.2. Obviously this symbol has nothing to do with the functions I declared in bar.h. What's wrong here?
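    _ZSt4cerr is the mangled name of std::cerr, so the missing symbol comes from the C++ standard library rather than from anything declared in bar.h. A quick way to compare what the two libraries export and depend on (a sketch to run against each .so):
      nm -D libbar.so.1 | grep cerr        # defined here, or undefined (U)?
      nm -D libbar.so.2 | grep cerr
      readelf -d libbar.so.1 | grep NEEDED # does it list libstdc++ as a dependency?
      readelf -d libbar.so.2 | grep NEEDED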

    Read the article

  • How to Inspect Javascript Object

    - by Madhan ayyasamy
    You can inspect any JavaScript object and list its members indented and ordered by levels. It shows you the type and property name. If an object property can't be accessed, an error message is shown. Here is the snippet for inspecting a JavaScript object:
      function inspect(obj, maxLevels, level)
      {
        var str = '', type, msg;

        // Start input validations
        // Don't touch, we start iterating at level zero
        if(level == null)  level = 0;

        // At least you want to show the first level
        if(maxLevels == null) maxLevels = 1;
        if(maxLevels < 1)
            return '<font color="red">Error: Levels number must be > 0</font>';

        // We start with a non null object
        if(obj == null)
            return '<font color="red">Error: Object <b>NULL</b></font>';
        // End input validations

        // Each iteration must be indented
        str += '<ul>';

        // Start iterations for all objects in obj
        for(property in obj)
        {
          try
          {
            // Show "property" and "type property"
            type = typeof(obj[property]);
            str += '<li>(' + type + ') ' + property +
                   ( (obj[property]==null)?(': <b>null</b>'):('')) + '</li>';

            // We keep iterating if this property is an Object, non null
            // and we are inside the required number of levels
            if((type == 'object') && (obj[property] != null) && (level+1 < maxLevels))
              str += inspect(obj[property], maxLevels, level+1);
          }
          catch(err)
          {
            // Is there some property in obj we can't access? Print it red.
            if(typeof(err) == 'string')  msg = err;
            else if(err.message)         msg = err.message;
            else if(err.description)     msg = err.description;
            else                         msg = 'Unknown';
            str += '<li><font color="red">(Error) ' + property + ': ' + msg + '</font></li>';
          }
        }

        // Close indent
        str += '</ul>';
        return str;
      }
    Method call: inspect(obj [, maxLevels [, level]])
    Input vars: obj: the object to inspect; maxLevels: optional, the number of levels to inspect inside the object (default 1); level: RESERVED for internal use of the function.
    Return value: an HTML-formatted string containing all values of the inspected object obj.
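    A minimal usage sketch (the target element id is illustrative):
      // dump two levels of the navigator object into a placeholder element
      document.getElementById('output').innerHTML = inspect(navigator, 2);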

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work; I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest?
    Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':
      void method_one()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          while(true)
          {
              for(std::size_t i = 0; i < channel_list.size(); ++i)
              {
                  Data* current_item = channel_cursors[i];

                  ACQUIRE_MEMORY_BARRIER;
                  if(current_item->state_ == NOT_FILLED_YET) {
                      continue;
                  }

                  log_latency(current_item->tv_sec_, current_item->tv_usec_);
                  channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
              }
          }
      }
    Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...
    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID onto the update list. Now we just need to monitor the single update list, and check the IDs we get from it.
      void method_two()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

          while(true)
          {
              if(update_cursor->state_ == NOT_FILLED_YET) {
                  continue;
              }

              ::uint32_t update_id = update_cursor->list_id_;
              Data* current_item = channel_cursors[update_id];

              if(current_item->state_ == NOT_FILLED_YET) {
                  std::cerr << "This should never print." << std::endl; // it doesn't
                  continue;
              }

              log_latency(current_item->tv_sec_, current_item->tv_usec_);
              channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
              update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
          }
      }
    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...
    Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.
      void method_three()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

          while(true)
          {
              for(std::size_t i = 0; i < channel_list.size(); ++i)
              {
                  std::size_t idx = i;

                  ACQUIRE_MEMORY_BARRIER;
                  if(update_cursor->state_ != NOT_FILLED_YET) {
                      //std::cerr << "Found via update" << std::endl;
                      i--;
                      idx = update_cursor->list_id_;
                      update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                  }

                  Data* current_item = channel_cursors[idx];

                  ACQUIRE_MEMORY_BARRIER;
                  if(current_item->state_ == NOT_FILLED_YET) {
                      continue;
                  }

                  found_an_update = true;
                  log_latency(current_item->tv_sec_, current_item->tv_usec_);
                  channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
              }
          }
      }
    The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • OJB Reference Descriptor 1:0 relationship? Should I set auto-retrieve to false?

    - by godzillasdm
    Hi, I am having an issue while using Apache OJB with Spring 2 inside my web app. I'm using an OJB reference-descriptor with 2 foreign-key properties. I have an object A (parent) and an object B (referenced object). The thing is, for an object A, there may or may not be an object B. In the case where there is no object B to go with object A, object B seems to be instantiated (through Spring?) anyway. However, I am unable to access object B's members. Whenever I test whether object B == null, it always returns false even though there is no matching value in the database. Since this object is never null, I figured I could test one of the object's members like so: if (objectb.getDocumentNumber() == null) { return false; } However, I get an exception in the JSP: javax.servlet.jsp.el.ELException: An error occurred while getting property "documentNumber" from an instance class org.sample.pojo.Objectb$$EnhancerByCGLIB$$78022a2 and this exception in the debugger when it's creating the objectB: com.sun.jdi.InvocationException occurred invoking method. I am guessing that the reference-descriptor must be a 1:1+ relationship, instead of a 1:0+ relationship. I was wondering if I should set the property 'auto-retrieve' to false, and then use the PersistenceBroker.retrieveAllReferences(Object obj) method as directed. However, this method's return value is 'void', so I am guessing that Spring somehow creates and sets the reference class for me. (Returning me back to the same issue I'm having.) I will need a way to test whether the reference object exists first and, if not, not call this retrieveAllReferences method, but I don't see how. Am I going about this all wrong? Does reference-descriptor not allow 1:0 relations? Any workaround for my problem? Your suggestions are greatly appreciated!

    Read the article

  • What is the role of web hosting in SEO [closed]

    - by Vinay
    Possible Duplicate: Does changing web hosting server affects SEO page ranking? SEO Geolocation What are the best ways to increase a site's position in Google? How to find web hosting that meets my requirements?
    I have read somewhere that hosting providers do play a role in website SEO. My website is hosted with Yahoo Small Business, which provides analytics and some other tools to check keyword activity; I think the same can be achieved with Google Analytics as well. Server performance and uptime are important factors. I also have a few doubts in my mind: 1) Does shared hosting affect SEO, and what is the role of the domain extension (.com, .in, .org, etc.)? 2) Does server geolocation affect SEO? 3) Does the server OS affect SEO? Apart from the above, are there any other factors that affect SEO? One last question: if hosting really matters a lot, can you suggest a web hosting service for a small business e-commerce site in PHP?

    Read the article

  • Develop website locally and push updates to a remote server using Git

    - by John
    Together with a friend we are looking to develop a website (using Symfony2). We are on shared hosting with SSH access. Below is the environment we'd like to set up: use Git as version control (we are new to Git), share the tasks and develop on our local machines, and push the updates to the remote server. Here are our initial thoughts on how to do it (assuming Git is already running both locally and remotely): install Symfony on the remote server (basic setup), get a clone of the project locally (using Git), then develop the project locally and push updates (using Git) to the remote server. Does this approach make sense? If not, any recommendations? Thanks
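    One common way to wire up the push step, assuming SSH access to the host (all paths and names below are illustrative): keep a bare repository on the server and let a post-receive hook check the pushed code out into the web root.
      # on the server (over SSH)
      git init --bare ~/repos/site.git
      # put these two lines in ~/repos/site.git/hooks/post-receive and make it executable:
      #   #!/bin/sh
      #   GIT_WORK_TREE=/home/user/public_html git checkout -f
      chmod +x ~/repos/site.git/hooks/post-receive

      # on each developer machine
      git remote add live user@example.com:repos/site.git
      git push live master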

    Read the article

  • OpenGL (glx) not working with Ubuntu 12.10

    - by user26766
    It all started when I installed NVIDIA's own driver (downloaded from their website). Uninstalling it and reverting back to nvidia-current didn't solve the problem, so I have been playing with this for a while. It seems GLX support is missing, and my Intel graphics is not responding. GNOME loads only in fallback mode. Here are some outputs:
      glxinfo
      name of display: :0.0
      Error: couldn't find RGB GLX visual or fbconfig
      glxgears
      Error: couldn't get an RGB, Double-buffered visual
      optirun glxgears   (works fine)
      lspci | grep VGA
      00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
      01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff)
    Here's the content of the log file just after running glxinfo:
      less /var/log/Xorg.0.log | grep gl
      [ 112.156] (II) LoadModule: "glx"
      [ 112.157] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
      [ 112.157] (II) Module glx: vendor="X.Org Foundation"
      [ 112.157] (II) UnloadModule: "glx"
      [ 112.157] (II) Unloading glx
      [ 112.157] (EE) Failed to load module "glx" (module requirement mismatch, 0)
    Some more info, the output of lsmod:
      Module Size Used by bbswitch 13612 0 pci_stub 12623 1 vboxpci 23195 0 vboxnetadp 25671 0 vboxnetflt 23480 0 vboxdrv 320372 3 vboxpci,vboxnetadp,vboxnetflt parport_pc 32689 0 ppdev 17074 0 bnep 18141 2 rfcomm 46620 12 binfmt_misc 17501 1 snd_hda_codec_hdmi 32049 1 snd_hda_codec_realtek 78147 1 joydev 17458 0 uvcvideo 76750 0 videobuf2_core 32852 1 uvcvideo videodev 120310 2 uvcvideo,videobuf2_core videobuf2_vmalloc 12861 1 uvcvideo videobuf2_memops 13405 1 videobuf2_vmalloc snd_hda_intel 33492 3 coretemp 13401 0 kvm_intel 132760 0 arc4 12530 2 snd_hda_codec 134213 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel kvm 414111 1 kvm_intel iwlwifi 386837 0 i915 520799 2 snd_hwdep 17699 1 snd_hda_codec snd_pcm 96668 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec psmouse 100389 0 drm_kms_helper 49113 1 i915 ghash_clmulni_intel 13221 0 snd_seq_midi 13325 0 snd_rawmidi 30513 1 snd_seq_midi btusb 22475 0 drm 288436 3 i915,drm_kms_helper serio_raw 13216 0 snd_seq_midi_event 14900 1 snd_seq_midi snd_seq 61555 2 snd_seq_midi,snd_seq_midi_event mac80211 535936 1 iwlwifi bluetooth 209249 22 bnep,rfcomm,btusb snd_timer 29426 2 snd_pcm,snd_seq snd_seq_device 14498 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78921 16 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device cfg80211 206797 2 iwlwifi,mac80211 aesni_intel 51038 1 cryptd 20404 2 ghash_clmulni_intel,aesni_intel aes_x86_64 17256 1 aesni_intel dell_wmi 12682 0 sparse_keymap 13891 1 dell_wmi dcdbas 14439 0 i2c_algo_bit 13414 1 i915 microcode 22804 0 lpc_ich 17062 0 soundcore 15048 1 snd snd_page_alloc 18485 2 snd_hda_intel,snd_pcm video 19391 1 i915 mac_hid 13206 0 wmi 19071 1 dell_wmi mei 40691 0 lp 17760 0 parport 46346 3 parport_pc,ppdev,lp ahci 25721 3 libahci 31192 1 ahci atl1c 41102 0
    How can I fix this? Any ideas? Here is another thing I've tried:
      sudo apt-get purge nvidia*
      sudo reboot
      sudo apt-get install bumblebee bumblebee-nvidia
    It didn't make any difference. The most relevant post I found on the web was this: http://ubuntuforums.org/archive/index.php/t-1722306.html Here it's explained that the problem is with the priority of shared libraries that are loaded for glxinfo.
    Here's what I get when I look up the libraries:
      linux-vdso.so.1 => (0x00007fff6bf8b000)
      libGL.so.1 => /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 (0x00007f22d6ccd000)
      libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f22d6993000)
      libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22d65d3000)
      libglapi.so.0 => /usr/lib/x86_64-linux-gnu/libglapi.so.0 (0x00007f22d63ad000)
      libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f22d619b000)
      libXdamage.so.1 => /usr/lib/x86_64-linux-gnu/libXdamage.so.1 (0x00007f22d5f97000)
      libXfixes.so.3 => /usr/lib/x86_64-linux-gnu/libXfixes.so.3 (0x00007f22d5d91000)
      libX11-xcb.so.1 => /usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 (0x00007f22d5b8f000)
      libxcb-glx.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0 (0x00007f22d5977000)
      libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f22d5759000)
      libXxf86vm.so.1 => /usr/lib/x86_64-linux-gnu/libXxf86vm.so.1 (0x00007f22d5553000)
      libdrm.so.2 => /usr/lib/x86_64-linux-gnu/libdrm.so.2 (0x00007f22d5346000)
      libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f22d5129000)
      libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f22d4f25000)
      /lib64/ld-linux-x86-64.so.2 (0x00007f22d6f54000)
      libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f22d4d20000)
      libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f22d4b1a000)
      librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f22d4911000)
    It seems "mesa" instead of nvidia-current is being used. However I didn't find any obvious link to /lib/nvidia-current in /etc/ld.so.conf (where the directories containing the shared libraries are listed). I don't know if there's anything missing or if this is normal. Thanks.
    UPDATE: The problem was solved by updating to Ubuntu 13.04. It seems bumblebee was to blame, but I'm not sure what was going wrong...

    Read the article

  • Does it matter that the TTL is higher than I've been told?

    - by Andy
    OK, so the title isn't very descriptive, I know. Basically, I'm trying to configure Outlook.com to use my custom domain. I've followed the steps and made the account etc., and now I have the DNS settings to configure from Windows Live. I added the MX entries and everything last night, but Windows Live is still saying I need to prove my ownership of the domain. The only thing I can think of is that I had to use a different TTL from the one provided, because my web host will only allow a minimum of four hours, whereas Windows Live told me to configure the TTL as one hour. Would that stop anything? By the way, my web host is JustHost (shared hosting).

    Read the article

  • Podcast site - Serve audio files with CDN

    - by Bobe
    I am managing a small podcast website hosted on a shared server. Currently there are only eight or nine episodes, each of which are about 50 MB, so bandwidth is not really an issue at the moment. However, looking forward, would it be feasible to use a "free" CDN like Cloudflare to serve the audio files? If so, how would I set this up? I took a quick look at it before, and it seems you have to have your whole site routed (is that the right term?) through the CDN rather than just specific files or filetypes. I'd like some clarification on this.

    Read the article

  • How to dispose of a .NET COM interop object on Release()

    - by mhenry1384
    I have a COM object written in managed code (C++/CLI). I am using that object from standard C++. How do I force my COM object's destructor to be called immediately when the COM object is released? If that's not possible, can I have Release() call a MyDispose() method on my COM object? My code to declare the object (C++/CLI):
      [Guid("57ED5388-blahblah")]
      [InterfaceType(ComInterfaceType::InterfaceIsIDispatch)]
      [ComVisible(true)]
      public interface class IFoo
      {
          void Doit();
      };

      [Guid("417E5293-blahblah")]
      [ClassInterface(ClassInterfaceType::None)]
      [ComVisible(true)]
      public ref class Foo : IFoo
      {
      public:
          void MyDispose();
          ~Foo() { MyDispose(); }                  // This is never called
          !Foo() { MyDispose(); }                  // This is called by the garbage collector.
          virtual ULONG Release() { MyDispose(); } // This is never called
          virtual void Doit();
      };
    My code to use the object (native C++):
      #import "..\\Debug\\Foo.tlb"
      ...
      Bar::IFoo setup(__uuidof(Bar::Foo)); // This object comes from the .tlb.
      setup.Doit();
      setup->Release(); // explicit release, not really necessary since Bar::IFoo's destructor will call Release().
    If I put a destructor method on my COM object, it is never called. If I put a finalizer method, it is called when the garbage collector gets around to it. If I explicitly call my Release() override, it is never called. I would really like it so that when my native Bar::IFoo object goes out of scope it automatically calls my .NET object's dispose code. I would think I could do it by overriding Release() and, if the object count = 0, calling MyDispose(). But apparently I'm not overriding Release() correctly, because my Release() method is never called. Obviously, I can make this happen by putting my MyDispose() method in the interface and requiring the people using my object to call MyDispose() before Release(), but it would be slicker if Release() just cleaned up the object. Is it possible to force the .NET COM object's destructor, or some other method, to be called immediately when a COM object is released? Googling on this issue gets me a lot of hits telling me to call System.Runtime.InteropServices.Marshal.ReleaseComObject(), but of course, that's how you tell .NET to release a COM object. I want COM Release() to dispose of a .NET object.

    Read the article
