Search Results

Search found 91599 results on 3664 pages for 'user manual'.


  • Google Chrome sync: limit for bookmarks & extensions?

    - by Lyubomyr Shaydariv
    Actually, Chrome is my favorite web browser, and one of its most powerful features is synchronizing data into a Google account. Over the last few years I have accumulated a lot of bookmarks, and from time to time I browse the extensions gallery to find new valuable ones. Synchronizing between my work and home PCs has freed me from manual sync. In recent months, though, I have experienced strange glitches. I guess they may be caused by the large number of stored bookmarks (roughly 3K at an estimate, but please don't ask why :)) and extensions (about 130 installed, but only 10-15 used daily). I have noticed the following strange things:
    - Recently added bookmarks are sometimes not synchronized (e.g. I add a bookmark at work, but it's not guaranteed I'll see it at home that evening), even though about:sync indicates a good sync process.
    - Recently modified bookmarks sometimes end up in either a (let's call it) "last at home" or a "last at work" bookmark folder.
    - Sometimes bookmarks are not synced at all (moreover, Chromium versions may even crash).
    - Extensions are not synced at all now. Perhaps there's another reason, but Google Mail Checker and Google Reader Notifier no longer show indicators for incoming e-mails and news.
    I'm not sure, but it looks like I might be exceeding Chrome's internal sync limits. Is that right? Are there any workarounds, or should I do a massive bookmarks/extensions cleanup (I really don't want to :()? I mostly use Google Chrome Canary builds, and my current one is 12.0.732.0. Thanks in advance.
    Update #1 (2011-04-19): I removed about 50 extensions that I'm not interested in (or that I consider trash), and got some good results: the extensions count is below 100 (exactly 97); the chrome://extensions page no longer gets slow (or even frozen) when enabling/disabling/uninstalling extensions; and the extensions seem to be synchronized again.

    Read the article

  • MacBook Pro (OSX Lion) - shutdown automatically before reaching login screen

    - by mkk
    When I try to launch my MacBook Pro I can see a progress bar on the loading screen. It gets to about 1/15 of the way and then the machine shuts down - I cannot even reach the login screen. This happened to me 2 months ago, and I 'fixed' it by formatting my hard drive and installing OS X (Lion) again. This time I think the situation is a little bit different - I am able to enter single-user mode by pressing Cmd + S. I then type /sbin/fsck -yf and get the error:
      ** Checking Journaled HFS Plus volume.
         The volume name is Macintosh HD
      ** Checking extents overflow file.
      ** Checking catalog file.
         Invalid node structure (4, 24704)
      ** The volume Macintosh HD could not be verified completely.
      /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8
    but when I type exit, I can see the login screen and I can log in. I have tried a lot of things, including booting from the recovery partition and choosing Disk Utility to repair the disk, but I get an error that it cannot be repaired. I have googled for hours and the only real solution I have found was to buy DiskWarrior, which might fix the issue. Any other suggestions? A secondary question is: what causes this issue? I thought the reason was bad sectors, but Smart Utility hasn't found any. I found a suggestion that RAM could cause this kind of issue as well, so I downloaded Rember and ran a memory test - all tests passed. Right now my workaround is entering single-user mode and then typing exit, however I am not sure how long it will keep 'working'. Of course I have backed up what I consider important. Thanks for the help in advance!
    UPDATE: I guess Smart Utility was not very useful; I managed to get an input/output error, which I believe is equivalent to a bad sector.
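
    (A minimal sketch of the usual single-user repair sequence, for anyone hitting the same catalog error; it may well stop on the same "Invalid node structure" message, which is the point where tools like DiskWarrior or a reformat become the remaining options.)
      # hold Cmd-S while powering on, then at the single-user prompt:
      /sbin/fsck -fy     # repeat until it reports "The volume Macintosh HD appears to be OK"
      /sbin/fsck -fy     # run at least one more pass after any repairs are made
      reboot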

    Read the article

  • nxclient crashes when trying to open a terminal from a remote client through "ssh -Y"

    - by user167328
    I support around 150 Linux machines. I have 2 virtual machines on an ESXi server which I access via nxmachine v3 from a Windows 7 box. These machines run CentOS 5 with KDE and Lubuntu 12.04.1, and they are the admin GUIs from which I support the 150 machines. The Linux machines which I manage are Red Hat 4/5, CentOS 5 and Ubuntu 10 and 12. Normally I contact the machines via ssh -Y. Today I did an ssh -Y to a remote machine which is running Ubuntu 12.10 and ssh 6.0p1. Then I tried to open an lxterminal on the remote machine which should display on my KDE desktop. This immediately and reproducibly crashed my nxclient session. I tried again from my Lubuntu system with the same effect. I have not observed the phenomenon from other machines yet. The message log on my KDE host shows:
      Unexpected termination of nxagent because of signal: 11
      Logger::log nxnode 3920
    Googling for this revealed no usable answer. Does anybody have a clue what is going on here, or can anyone give a hint on how to solve the issue?
    Add On: I asked the user at the remote machine to export his DISPLAY to my host and open an lxterminal. This worked without problems, i.e. the nxclient did not crash. Then the user tried to send me xeyes, and this also killed the nxclient with the same error message in the message log as above. This makes me suspect that the problem is not solely connected to ssh but maybe to some library stuff.
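
    (One way to narrow this down, sketched below: try untrusted X11 forwarding instead of trusted forwarding, since -Y lets remote clients use X extensions that nxagent may not handle; the hostname is a placeholder.)
      ssh -X admin@remote-ubuntu        # untrusted forwarding instead of -Y
      echo $DISPLAY                     # should show something like localhost:10.0
      xauth list                        # confirm a cookie exists for that display
      lxterminal &                      # see whether the untrusted channel also crashes nxagent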

    Read the article

  • Netgear GS724Tv3 and link aggregation Mac OS X Server 10.6.8

    - by Manca Weeks
    I need to link-aggregate 2 sets of ports on the Netgear GS724T with my Apple server tower (latest generation). I have 2 built-in ports and 2 ports on a PCIe Ethernet card. It is not obvious to me how to properly configure the Netgear end. I have access to the Netgear box through its web interface, I just don't know how to set the settings properly. I tried going to Netgear for help, but they said my software support has expired. I bought this unit on their recommendation - they say it is compatible with the 802.3ad protocol. I cannot locate any references to this protocol in the manual, and I noticed some people in forums say that this device is actually not compatible with 802.3ad and that Netgear is misleading potential customers by saying it is. Any help will be appreciated. Thanks, M
    My own answer - posted as an edit because of restrictions on my user: OK folks, turns out one must use a Windows machine on this one or nothing makes sense. I was unable to get much farther than viewing the default inactive LAGs, because in Firefox and Safari on the Mac things don't make much sense - i.e. the Apply buttons (supposedly JavaScript) don't work. You can view the configurations, but none of the modifications you make stick. Then, in Switching - LAGs, choose the ports to include and make sure you switch the LAG type from Static to LACP, and all is well. Haven't tested the performance of the config yet, but both sides appear to be happy with the configuration: the Apple server says the link is active, and so does the Netgear. Will report any other discoveries. Thanks to all who read and to user84104 for responding. M

    Read the article

  • Mac OS Leopard: SyncServer process constantly using 100% CPU

    - by macca1
    I am running Leopard, upgraded from Tiger. I've noticed that every once in a while the SyncServer process starts up and eats all the CPU. The fans go at full blast and the laptop slows to a crawl. I need to force quit the process from Activity Monitor to get it under control. It disappears for a while, but eventually starts again. I do have an iPhone that I sync, so I'm wondering if SyncServer might be an Apple process checking for my phone being plugged in.
    Edit: Tried iSync and the manual resetsync as suggested, but got this output:
      Vince-2:~ vince$ /System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl full
      2010-03-12 08:03:50.230 perl[176:10b] SyncServer is unavailable: exception when connecting: connection timeout: did not receive reply
      PerlObjCBridge: NSException raised while sending reallyResetSyncData to NSObject object
        name: "ISyncServerUnavailableException"
        reason: "Can't connect to the sync server: NSPortTimeoutException: connection timeout: did not receive reply ((null))"
        userInfo: ""
        location: "/System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl line 16"
      ** PerlObjCBridge: dying due to NSException
      Vince-2:~ vince$
    And during that, SyncServer started spinning up to 95-100% CPU just like it always does.
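
    (A hedged workaround sketch, assuming the local sync data lives in the usual ~/Library/Application Support/SyncServices/Local folder on Leopard - that path is an assumption about this install. Moving the folder aside forces SyncServer to rebuild its database on the next sync, and the copy can be restored if anything goes wrong.)
      killall SyncServer                              # stop the runaway process first
      mv ~/Library/Application\ Support/SyncServices/Local \
         ~/Desktop/SyncServices-Local.bak             # path is an assumption; keep a copy rather than deleting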

    Read the article

  • Problems with connecting Thunderbird client to dovecot installed on Ubuntu

    - by Michael Omer
    I am trying to connect a Thunderbird client to my Dovecot server. Dovecot is installed on Ubuntu. I know that my server works (at least partially), since when I send a mail to a user on the server ([email protected]), I see the new file created in /home/feedback/Maildir/new. However, when I try to connect to the server with Thunderbird, it recognizes the server but informs me that my user/password is wrong (they are not wrong). The exact message is:
      Configuration could not be verified - is the username or password wrong?
    The server configuration it tries to connect to is: incoming - IMAP 143, outgoing - SMTP 587. The Dovecot configuration file is located here: dovecot.conf. My PAM configuration is:
      @include common-auth
      @include common-account
      @include common-session
    In the log, I see:
      May 23 06:07:20 misfortune dovecot: imap-login: Disconnected (no auth attempts): rip=77.126.236.118, lip=184.106.69.153
    dovecot -n gives me:
      log_timestamp: %Y-%m-%d %H:%M:%S
      protocols: pop3 pop3s imap imaps
      ssl: no
      login_dir: /var/run/dovecot/login
      login_executable(default): /usr/lib/dovecot/imap-login
      login_executable(imap): /usr/lib/dovecot/imap-login
      login_executable(pop3): /usr/lib/dovecot/pop3-login
      mail_privileged_group: mail
      mail_location: maildir:~/Maildir
      mbox_write_locks: fcntl dotlock
      mail_executable(default): /usr/lib/dovecot/imap
      mail_executable(imap): /usr/lib/dovecot/imap
      mail_executable(pop3): /usr/lib/dovecot/pop3
      mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
      mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
      mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
      imap_client_workarounds(default): tb-extra-mailbox-sep
      imap_client_workarounds(imap): tb-extra-mailbox-sep
      imap_client_workarounds(pop3):
      auth default:
        passdb:
          driver: pam
        userdb:
          driver: passwd
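
    (A quick way to take Thunderbird out of the equation is a manual IMAP login from another machine; this is only a sketch - the hostname is a placeholder and the password travels in clear text, so only try it on a trusted network. If the manual LOGIN succeeds, the problem is on the client side, e.g. Thunderbird insisting on secure authentication or STARTTLS that a server with "ssl: no" does not offer; if the server refuses plaintext logins, Dovecot's disable_plaintext_auth setting is the usual suspect; if it fails outright, look at the PAM passdb.)
      telnet mail.example.com 143            # placeholder hostname
      a1 LOGIN feedback 'secretpassword'     # plain-text IMAP login, for testing only
      a2 LIST "" "*"                         # should list INBOX if the login worked
      a3 LOGOUT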

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including Applications, Administration and System tabs) that is taking too much time (for me) to load: about 2.5 seconds. Of course, this time is only taken during the first start; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu was taking even more time before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all).
    I have a "normal" knowledge of programming, and I think the process of starting the menu for the first time has some kind of "cache function" that tries to find which installed apps need to be placed in the menu shown to the user. But I haven't found this function, so I can't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anywhere else). Also, I found that hitting Alt-F2 also fires this "cache function", because after waiting for it to load, opening the menu took less than 0.2 milliseconds.
    So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but I found that most of my icons are already 25x25. Any other ideas? Maybe loading it in a separate process, or including it in startup... I don't know.
    PS: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!
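
    (A small sketch of the kind of inventory that helps here, assuming the menu is built from the standard .desktop directories and that ImageMagick is installed for the icon check; any paths beyond these are just guesses.)
      # how many menu entries the first-time scan has to parse
      ls /usr/share/applications/*.desktop ~/.local/share/applications/*.desktop 2>/dev/null | wc -l
      # look for application icons noticeably larger than the 25x25 the menu needs (requires imagemagick)
      find /usr/share/pixmaps -name '*.png' -exec identify -format '%w %h %i\n' {} \; 2>/dev/null | awk '$1 > 25 || $2 > 25'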

    Read the article

  • Windows 2003 R2 zip program blocking EXE file

    - by Harvey Kwok
    I have a Windows 2003 R2 Enterprise Edition SP2 32-bit machine with all the latest patches (as of 1-6-2011). It's a VM. I have a zip file containing a pdf file, a txt file and an exe file. If I copy the zip file onto the machine via a shared network drive, I can unzip all the files properly without problems. If I put the zip file on my web server and then download it from there, I can only unzip the pdf file and the txt file; the exe file is silently ignored. I searched the web and found somebody reporting a similar issue on XP. If I right-click on the zip file downloaded from the web server, at the bottom of the General page there is a warning message saying "This file came from another computer and might be blocked to help protect this computer". I understand that I can solve the problem by simply clicking the "Unblock" button and extracting the files again. The thing that bothers me is: why does the warning message say "might be blocked"? I tried downloading the same zip file from the same web server onto my Windows 7 box with the latest patches. It also shows the same warning message. However, even with the warning message, I can extract all the files properly without clicking the "Unblock" button. Is it a bug in Windows 2003 R2 SP1? Are there any security settings controlling this? How likely is the end user to see this problem? I want to dig into this because I am worried that people downloading my zip file from my web server might see similar problems. The first thought coming to the user's mind will be that the zip file is somehow corrupted. Honestly, I didn't know about this "Unblock" feature in Windows before I ran into this problem.
    EDIT: I just tried it on another Windows 2003 R2 SP1 machine. The zip program doesn't block the EXE file on that machine either. Both Windows 2003 R2 SP1 machines are joined to the same forest.

    Read the article

  • GA 8KNXP Rev1.0: 4 GB installed, only 3.5 GB recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB of memory into my GA-8KNXP system, which should sum to 4 GB. The specification from the manual says: maximum memory support is 4 GB; if all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules. And so I did. Anyway: the BIOS counts up to 3.5 GB and finishes there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory mapping issue or a hardware issue. I've tried removing only one of the 512 MB memory modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black although fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5+6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think technically the chipset should be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?

    Read the article

  • Remote desktop type software where the client need not install anything...

    - by allentown
    I am primarily a Macintosh user, and can usually walk a client through any trouble they may have because I have a Macintosh in front of me. If they are on a different OS, things are close enough, or I can remember them, so I can get by. When trying to help clients on Windows, I get stuck. I do not have access to Windows, and even if I did, there are far too many versions of Outlook, all with their various esoteric settings and checkboxes, for me to ever see exactly what they are seeing. I mostly need to just help them with email setup. Something like copilot.com may do the trick. What is the simplest remote control software out there? Ideally, it would accomplish these:
    - No software needed on the remote end, or a single .exe that they can toss when done.
    - I need Mac-based software on my end. I do have ARD, which supports VNC.
    - Free :) if possible; that would be really nice.
    - Needs a port-forwarding proxy run by the company. There is no way I can get the user to alter their router, or to even plug directly into their WAN for a short time.
    On the Mac, I just have them open iChat, and this is all built in, proxying through AIM. I'm looking for the same for Windows and Mac.

    Read the article

  • Can I autoregister my clients/servers in local DNS?

    - by Christian Wattengård
    Right now I have a W2k12 server at home that I run as a domain controller. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time (and it lets me easily run DHCP on my servers too). I need to rework my home network for several odd reasons, and in this new scenario there is no place for a big honking W2k12 server box. I have a RasPi, and I have other smallish Linux boxen I can use. (In a worst-case scenario I'll use my NUC, but then I'll be forced to use my home cinema's UPnP client for media... The HORROR!!) Is it possible to set up a DNS-server "appliance" where every host somehow autoregisters its own hostname? Scenario:
    - Router (N66u) on 172.20.20.1. Runs DHCP on the 172.20.20.100-200 range.
    - Server [verdant] of a *nix flavor on 172.20.20.2
    - Laptop [speedy] of W8 flavor, DHCP assigned
    - Laptop [canary] of W8 flavor, DHCP assigned
    - Desktop [lianyu] of Ubuntu flavor, DHCP assigned
    What I would like is that all of the above machines (except possibly the router) would be available as verdant.starling.lan, canary.starling.lan and so on. This is how it works right now (except the Ubuntu box... I haven't cracked that one yet) because Windows just does this for you. I would also like to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment? (Bonus comment upvote for naming the naming scheme :P )
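
    (One common way to get this behaviour on a small Linux box is dnsmasq, which automatically registers the hostname each DHCP client sends with its lease into local DNS. A minimal sketch, assuming a Debian/Ubuntu host such as verdant takes over DHCP from the router - the drop-in file name and the lease time are arbitrary choices.)
      sudo apt-get install dnsmasq
      sudo tee /etc/dnsmasq.d/starling.conf <<'EOF'
      domain=starling.lan                            # local domain suffix
      expand-hosts                                   # append the suffix to bare hostnames
      local=/starling.lan/                           # never forward queries for the local domain
      dhcp-range=172.20.20.100,172.20.20.200,12h     # dnsmasq hands out leases, so disable DHCP on the N66u
      dhcp-option=option:router,172.20.20.1          # default gateway stays the router
      EOF
      sudo service dnsmasq restart
    Clients that include their hostname in the DHCP request (Windows and most Linux dhclient setups do) then resolve as name.starling.lan with no per-host work.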

    Read the article

  • Network problems with Ubuntu on VMware

    - by svick
    I'm running Ubuntu 10.04 inside VMware Player running under Windows Vista, and I can't connect to the internet or the host computer from the Ubuntu guest. I have set all the VMware services to "manual" (like VMware DHCP Service), but starting them manually doesn't help. In VMware, the network seems to work (there is a green dot beside the network icon) and I have tried both Bridged and NAT settings. ifconfig doesn't show the eth1 interface unless I give it as a parameter (or use -a). I think this means that Ubuntu thinks that the network isn't connected at all. How do I fix this?
      vadmin@ubu1004:~$ ifconfig
      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:56 errors:0 dropped:0 overruns:0 frame:0
            TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:4192 (4.1 KB)  TX bytes:4192 (4.1 KB)
      vadmin@ubu1004:~$ ifconfig eth1
      eth1  Link encap:Ethernet  HWaddr 00:0c:29:2d:a0:6f
            BROADCAST MULTICAST  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
            Interrupt:18 Base address:0x2000
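
    (The output above shows eth1 present but without the UP flag, which on 10.04 usually just means nothing ever configured it - for example /etc/network/interfaces only mentions eth0 because the virtual NIC's MAC changed and udev renamed the device to eth1. A hedged sketch of the checks, run inside the guest:)
      cat /etc/udev/rules.d/70-persistent-net.rules   # see which MAC got mapped to eth1
      cat /etc/network/interfaces                     # is eth1 listed with "auto" and "dhcp"?
      sudo ifconfig eth1 up                           # bring the interface up manually
      sudo dhclient eth1                              # ask the VMware NAT/bridged DHCP for a lease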

    Read the article

  • Error while installing boost_1_54

    - by Farhat
    On trying to install boost I get this error during configuration checks. Googling did not give any pointers.
      [root@heracles boost_1_54_0]# ./b2 install
      Performing configuration checks
      - 32-bit                   : no  (cached)
      - 64-bit                   : yes (cached)
      - arm                      : no  (cached)
      - mips1                    : no  (cached)
      - power                    : no  (cached)
      - sparc                    : no  (cached)
      - x86                      : yes (cached)
      error: No best alternative for libs/coroutine/build/allocator_sources
          next alternative: required properties: <link>static <target-os>windows <threading>multi
              not matched
          next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
              not matched
          next alternative: required properties: <link>static <threading>multi
              not matched
      - has_icu builds           : no  (cached)
      warning: Graph library does not contain MPI-based parallel components.
      note: to enable them, add "using mpi ;" to your user-config.jam
      - zlib                     : yes (cached)
      - iconv (libc)             : yes (cached)
      - icu                      : no  (cached)
      - icu (lib64)              : no  (cached)
      - compiler-supports-ssse3  : yes (cached)
      - compiler-supports-avx2   : no  (cached)
      - gcc visibility           : yes (cached)
      - long double support      : yes (cached)
      warning: skipping optional Message Passing Interface (MPI) library.
      note: to enable MPI support, add "using mpi ;" to user-config.jam.
      note: to suppress this message, pass "--without-mpi" to bjam.
      note: otherwise, you can safely ignore this message.
      error: No best alternative for libs/coroutine/build/allocator_sources
          next alternative: required properties: <link>static <target-os>windows <threading>multi
              not matched
          next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
              not matched
          next alternative: required properties: <link>static <threading>multi
              not matched
      - zlib                     : yes (cached)
    How can the alternative for allocator_sources be located? Thanks.
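
    (For what it's worth, a hedged workaround sketch: if Boost.Coroutine isn't actually needed, telling b2 to skip that library - and MPI, per the note above - usually lets the rest of 1.54 build. --without-<library> is standard b2 syntax, but whether dropping coroutine is acceptable depends on what you need from Boost.)
      ./b2 --without-coroutine --without-mpi install    # skip the library whose build alternatives cannot be matched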

    Read the article

  • External USB HD with -optional- mains?

    - by Stephen
    Hi, I'm Christmas-present buying, and I'd appreciate recommendations for a USB HD with an optional mains power input. I've hunted, but can't find all the information I want (partially due to sketchy product specifications). Background: this is for a digital TV which I do not own, so I'd like to get it right the first time. The TV has a USB port to allow recording straight to disk, but the manuals don't say how much power can be drawn through the USB port. The manual's instructions state, possibly generically, to plug the drive in before connecting it to the TV. Ideally I'd like a small (2.5"?) drive which can draw power over USB, with a mains power input in case it turns out the USB port on the TV doesn't offer enough juice. The ideal is to use one cable, two max. A powered USB hub would introduce too much clutter. I've spotted that the LaCie Petit drives have what appears to be an additional power input, but I'm not even sure from the specs what that is. And the device doesn't ship with a mains adapter. Suggestions?

    Read the article

  • Internal/External Moodle - DNS

    - by Chief17
    Network diagram: I have a Moodle (a VLE) setup that I want to be internally and externally accessible. The green route on the diagram below is the route I would like the traffic to take when the user is inside the LAN, and the red route is seemingly what it does take. The website has a domain name (like most websites do). From the user PC, if I ping the domain name, I get the internal IP of the webserver (because of a hosts file entry); if I nslookup the domain name, I also get the internal IP of the webserver (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP. (Going well so far.) If I use PHP's gethostbyname() on the Moodle website with the domain name as a parameter (getting PHP/Apache to resolve the hostname), it returns the external IP of the webserver (good news, DNS seems to be doing what I want it to). All things so far seem to be going well. The only thing that is confusing me, and preventing the Moodle single sign-on from working, is the fact that if I get Moodle to show my IP address, it says it is an external one (outside my NATting firewall) when it should show an internal IP. This is the issue; any ideas on how to go about resolving it? Any ideas on tests I can perform (I have also tried a tracert and the request goes directly to the webserver)? Anything? Thanks all!
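
    (A hedged test sketch that separates DNS from routing: drop a tiny probe page on the webserver and fetch it from the user PC. If it echoes the firewall's external address, the HTTP traffic is hairpinning out and back through the NAT even though DNS hands out the internal IP, which is exactly the situation that confuses Moodle's client-IP checks. The file path and domain below are placeholders.)
      # on the webserver
      echo '<?php echo $_SERVER["REMOTE_ADDR"]; ?>' | sudo tee /var/www/whoami.php
      # from the user PC (a browser works just as well)
      curl http://vle.example.org/whoami.php     # placeholder domain name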

    Read the article

  • Excel 2007 - "The macro may not be available in this workbook" Error

    - by Psycho Bob
    We use an Excel sheet that has been protected to prevent modification by end users. All in all, they are only able to edit certain tabs to add information that is then used to generate information on other tabs using equations and such. On the tab with the equations there is a button called "Prep for Internal Hard Copy Print". This button runs a macro that selects the information on the tab, unprotects it, then sends a print job to the user's default printer containing the unprotected content. Normally this works like a champ. This time around, however, the macro throws the following error:
      Cannot run the macro "'FILENAME.xlsx'!MacroName". The macro may not be available in this workbook or all macros may be disabled.
    As far as I can tell, the macros are still present within the workbook. This sheet is normally a .xlsm, though the user saved it with a different filename as a .xlsx. Also, the macros appear only as MacroName in the .xlsm file and not as "'FILENAME.xlsx'!MacroName" as they do in the .xlsx. Finally, when I open the .xlsm it asks if I want to enable the macro content, while the .xlsx does not prompt for this. Can anyone tell me what's going on with this sheet, or does anyone know of a way that I can get the macros working in the .xlsx without having to start over with a different sheet?

    Read the article

  • SkyDrive broken after upgrade to Windows 8.1: "This location can't be found, please try later"

    - by avo
    Upgrading from Windows 8 to Windows 8.1 via the Store upgrade path has screwed up my SkyDrive. The C:\Users\<user name>\SkyDrive folder is empty (it contains only the single file desktop.ini). When I open the native (Store) SkyDrive app, I see "This location can't be found, please try later". I'm glad my files are still alive online in my SkyDrive account. I tried disconnecting from / reconnecting to my Microsoft account with no luck. Does anyone have an idea how to fix this without reinstalling/refreshing Windows 8.1? From Event Viewer:
      Faulting application name: skydrive.exe, version: 6.3.9600.16412, time stamp: 0x5243d370
      Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
      Exception code: 0x00000000
      Fault offset: 0x0000000000000000
      Faulting process ID: 0x4e8
      Faulting application start time: 0x01cece256589c7ee
      Faulting application path: C:\Windows\System32\skydrive.exe
      Faulting module path: unknown
      Report ID: {...}
      Faulting package full name:
      Faulting package-relative application ID:
    Also:
      The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {C2F03A33-21F5-47FA-B4BB-156362A2F239} and APPID {316CDED5-E4AE-4B15-9113-7055D84DCC97} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19) from address LocalHost (Using LRPC) running in the application container Unavailable SID (Unavailable). This security permission can be modified using the Component Services administrative tool.
    Never was a big fan of in-place upgrades anyway, but this time it was a machine that I use for work, with a lot of stuff already installed on it. I shouldn't have tried to upgrade it in the first place, but I was convinced Windows 8.1 was a solid update. Another lesson learnt.

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use BASH (or whatever else works) to hook into the extraction function of tar, to decode the PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple cat $input-file | base64 -d > $output-file will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images? In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given. Further, running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.
    Background information not included in the Long Story Short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to Github yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64 encoding.
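
    (Those backup-specs hooks belong to GNU tar's backup scripts rather than to plain tar -xf, so a more practical route is a small wrapper that users call instead of tar. A minimal sketch, assuming the base64-encoded textures all carry a .png name inside the archive; the archive and directory names are placeholders.)
      #!/bin/bash
      # untar-effect: extract an effect archive, then decode its base64 PNG textures in place
      set -e
      archive="$1"                               # e.g. effect.tar (placeholder name)
      dest="${2:-.}"                             # extraction directory, defaults to the current one
      tar -xf "$archive" -C "$dest"
      find "$dest" -type f -name '*.png' -print0 |
      while IFS= read -r -d '' f; do
          base64 -d "$f" > "$f.decoded" && mv "$f.decoded" "$f"   # decode each texture in place
      done
    Invoked as ./untar-effect effect.tar my-effect/, it behaves like tar -xf plus the decode step, so the extra work for users shrinks to remembering one command instead of two.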

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software... typically a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056 KB in maximum size. I have also attempted:
    - Disabling the Event Log service and restarting
    - Renewing the EVT files in the Windows\system32\config directory
    - Restarting the Event Log service and restarting
    - Clearing the event log in the Services MMC
    - Resetting the filters to default in the Services MMC
    - Using the EVENTCREATE command remotely from a CMD window on the server to force an event creation event
    So far the only operation to have had any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking possibly a refresh of the 'Windows\system32\config\SystemProfile' folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to keep running for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that would let me control the event logging options or default security options, so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.

    Read the article

  • Excel Help: Data Input Help

    - by B-Ballerl
    Every day I download data from a site; each row holds data for an individual client. I'm able to input the data into Excel as a whole, but after that I'm having trouble figuring out how to put it into a chart. For example, web visit time: say Client 1 stayed for 5 min, increasing his total time on the site to 20 min, and Client 2 stayed for 0 min, keeping his total at 10 min; both were registered on New Year's Eve, R1's last login was today and R2's was yesterday (R for some reason represents Client, no idea why...). Client 3 hasn't been on since he registered, keeping his total at 4 min. So my data would look something like this for today (20110104):
      R1,20101231,20110104,20
      R2,20101231,20110103,10
      R3,20101231,20101231,4
    And this for the day before (20110103):
      R1,20101231,20110102,15
      R2,20101231,20110103,10
      R3,20101231,20101231,4
    I get about 200+ client rows each day, and even the names in the client list change. Is it possible to import the data each day and fill in an Excel sheet where the client number sits in a table down the left-hand side, and the amount of time (a whole number, e.g. 4) spent on the site each day extends to the right under that day's date (see picture)? I've managed to create such a sheet manually but have been unsuccessful at getting Excel to do any of it for me. Here are two pictures:

    Read the article

  • pppd disconnects from 3G, doesn't reconnect, w/ persist set

    - by bytenik
    I am trying to configure pppd to connect to a 3G network (Sprint, in this case) and then stay connected, reconnecting automatically if the remote connection is terminated. I have enabled the persist option. My configuration file is as follows:
      hide-password
      noauth
      connect "/usr/sbin/chat -v -f /etc/chatscripts/cellular"
      debug
      /dev/cell
      921600
      defaultroute
      noipdefault
      user " "
      persist
      maxfail 0
      lcp-echo-failure 10
      lcp-echo-interval 60
      holdoff 5
    However, when the peer disconnects the connection, pppd often waits a long time (substantially more than my holdoff) to reconnect the modem - if it ever reconnects at all! An example log showing this:
      May 23 05:17:24 00270e0a8888 pppd[2408]: rcvd [LCP TermReq id=0x26]
      May 23 05:17:24 00270e0a8888 pppd[2408]: LCP terminated by peer
      May 23 05:17:24 00270e0a8888 pppd[2408]: Connect time 60.1 minutes.
      May 23 05:17:24 00270e0a8888 pppd[2408]: Sent 0 bytes, received 0 bytes.
      May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down started (pid 2456)
      May 23 05:17:24 00270e0a8888 pppd[2408]: sent [LCP TermAck id=0x26]
      May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down finished (pid 2456), status = 0x0
      May 23 05:17:24 00270e0a8888 pppd[2408]: Hangup (SIGHUP)
      May 23 05:17:24 00270e0a8888 pppd[2408]: Modem hangup
      May 23 05:17:24 00270e0a8888 pppd[2408]: Connection terminated.
      May 23 05:17:24 00270e0a8888 pppd[2408]: Terminating on signal 15
      May 23 05:17:24 00270e0a8888 pppd[2408]: Exit.
      May 23 06:08:07 00270e0a8888 pppd[2500]: pppd 2.4.5 started by root, uid 0
      May 23 06:08:10 00270e0a8888 pppd[2500]: Script /usr/sbin/chat -v -f /etc/chatscripts/cellular finished (pid 2530), status = 0x0
      May 23 06:08:10 00270e0a8888 pppd[2500]: Serial connection established.
      May 23 06:08:10 00270e0a8888 pppd[2500]: using channel 11
    The disconnect at the request of the peer occurs at 5:17, but the reconnect didn't happen until 6:08. I had a friend monitoring the server, so I'm not certain that this wasn't a manual reconnection. Either way, it either took almost an hour to reconnect or never reconnected. Shouldn't persist + holdoff 5 cause this to automatically reconnect 5 seconds after the link terminates?
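
    (Worth noting that the log ends with "Terminating on signal 15" and "Exit." - something sent pppd a SIGTERM, and persist cannot survive the daemon itself exiting. Until the sender of that signal is found, a blunt but common workaround is a watchdog run from cron; a hedged sketch, assuming the connection options live in a peers file called /etc/ppp/peers/cellular, which is an assumption about the local setup.)
      #!/bin/sh
      # /usr/local/sbin/ppp-watchdog -- run every minute from root's crontab:
      #   * * * * * /usr/local/sbin/ppp-watchdog
      if ! ip link show ppp0 >/dev/null 2>&1; then
          /usr/sbin/pppd call cellular     # peers file name is an assumption; restart the 3G link if ppp0 has vanished
      fi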

    Read the article

  • How can I restore "Open With" context menu item in Windows 7?

    - by Izzy Helianthus
    I have tried various ways to fix this problem but ended up at a dead end. My problem is the missing "Open With" context menu items (or subitems?). They do not appear even if I hover over the menu for a minute or two. Below is a screenshot of the respective right-click menu. Note: the only problem with "Open With" is in the right-click menu (as well as the File menu).
    Edited: The "Open With" submenu is only accessible from the top of the Explorer window, while the usual right-click menu doesn't work. Repasted from a comment: I don't think any Windows files are involved, because another user on the same computer isn't affected at all and can see the "Open With" context submenu. I believe this must involve the current user's registry. It happens with all files (any file type except folders). I can only use Open With by clicking the file and selecting the option manually at the top of the Explorer window. (Refer to the link for the screenshot.)

    Read the article

  • Setting MSN or Yahoo! Messenger status to Invisible or Offline when idle for an hour

    - by Jian Lin
    Where, or how, do I set it up in MSN Messenger or Yahoo! Messenger to automatically switch my status to either "Invisible" or "Offline" after being idle for half an hour, or an hour? I know how to set my status to "Away" or "Busy" after 10 minutes, but can't seem to find a way to set the offline status options without manual intervention.
    Back story: As a software developer, I am very used to turning the computer on for the whole day and not turning it back off (for example, checking email for urgent fixes, fixing the issue and pushing to the web server). It's not even turned off when I head to sleep, in case I find it hard to fall asleep and come back to check on the computer, or so that it is there ready in the morning to check that everything is okay. If I'm seen as being online 24 hours a day, some people see me as weird. Their perception of my value decreases because I'm always there (hard to get = high value; always there = low value). Leaving it on makes everyone in my contacts list think I have nothing better to do all day than sit in front of the computer, even though it's my job and I do admittedly spend more time online than other people. That's why I'd like to find a way to set my status to Invisible or Offline.

    Read the article

  • How to disable irritating Office File Validation security alert?

    - by Rabarberski
    I have Microsoft Office 2007 running on Windows 7. Yesterday I updated Office to the latest service pack, i.e. SP3. This morning, when opening an MS Word document (.doc format, and a document I created myself some months ago), I was greeted with a new dialog box saying:
      Security Alert - Office File Validation
      WARNING: Office File Validation detected a problem while trying to open this file. Opening this is probably dangerous, and may allow a malicious user to take over your computer.
      Contact the sender and ask them to re-save and re-send the file. For more security, verify in person or via the phone that they sent the file.
    along with two links to some Microsoft blabla webpage. Obviously the document is safe, as I created it myself some months ago. How do I disable this irritating dialog box?
    (On a sidenote, a rhetorical question: will Microsoft never learn? I consider myself a power user in Word, but I have no clue what could be wrong with my document that makes it be considered dangerous. Let alone more basic users of Word. Sigh....)

    Read the article

  • How to link to a subfolder of a share?

    - by Nicolas Raoul
    On my Windows XP server, a folder called Share2 is shared. It contains a subfolder called folder3. The guest account is protected by a password, which means network users have to type the guest password to access the share. When a user types \\server\Share2 in his file explorer, he is prompted for a password. When a user types \\server\Share2\folder3 in his file explorer, an error appears; he is not even prompted for a password. This is problematic because I want to link to this particular folder. How can I link to folder3?
    Notes:
    - Both desktop shortcuts and HTML links in IE7/8 give an error if I link to folder3, but work if I just link to Share2.
    - Using the file:// syntax instead of the \\ syntax leads to the same results.
    - Password setting per http://www.lancelhoff.com/how-to-password-protect-a-shared-folder
    - Not using "Simple File Sharing".
    - The error message is ???????????????????????? which means "could not find it; check the path and try again". No English Windows around to try, sorry! It is easy to reproduce the problem though, so can anyone post the English error message for the sake of searchability? Thanks!

    Read the article
