Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

  • VirtualBox management interface unreliability

    - by Arlen Cuss
    I'm using VirtualBox 3.2.8_OSE with 20 VMs running, and everything's going fine. I find that if I hammer the VBoxManage interface, all sorts of interesting things happen, usually necessitating either a restart of the VM in question or of all VMs. For instance, if I use VBoxManage guestcontrol execute to run processes, after a few hours of using it maybe once or twice a minute on any given VM, it'll mysteriously start reporting VERR_NOT_IMPLEMENTED and refusing to do anything. Sometimes restarting /usr/sbin/VBoxService on the VM itself will get it back in working order, but often it won't, and in the meantime no data can be collected using VBoxManage. Such data includes the VM's IP, so if I hadn't recorded it earlier, I'm usually in trouble and have no option but to portscan the network for it, or kill the VM's process on the host manually and restart it.

    This one I haven't narrowed down yet, but it seems that even using VBoxManage guestproperty get (to retrieve a machine's IP) frequently and rapidly is enough to cause all the VMs' management interfaces to die. The processes are still running fine, but VBoxManage reports them all as "powered off". In the meantime, another process somewhere in the system seems to have decided that their being powered off means they need to be powered on again, and suddenly I have twice as many VBoxHeadless processes running as before.

    Has anyone else seen behaviour like this? Is there any workaround? This is a serious impediment to my work, as I've had to resort to a lot of (hacky) caching of data and rate-limiting of my VBoxManage calls, just in case I accidentally bring 20 VMs to their knees.
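
    The workaround hinted at in the last sentence — caching guest properties and rate-limiting VBoxManage calls — could look something like this minimal sketch (the property path is VirtualBox's standard guest IP property; the 2-second interval and cache location are assumptions):

        #!/bin/bash
        # vbox-ip.sh VMNAME -- cached, rate-limited lookup of a guest's IP
        CACHE=/var/tmp/vbox-ip-cache
        STAMP=/var/tmp/vbox-last-call
        mkdir -p "$CACHE"
        vm="$1"
        # serve a previously seen IP from the cache, with no VBoxManage call at all
        if [ -s "$CACHE/$vm" ]; then cat "$CACHE/$vm"; exit 0; fi
        # global rate limit: at most one VBoxManage call every 2 seconds
        while [ -e "$STAMP" ] && [ $(( $(date +%s) - $(stat -c %Y "$STAMP") )) -lt 2 ]; do
            sleep 1
        done
        touch "$STAMP"
        ip=$(VBoxManage guestproperty get "$vm" /VirtualBox/GuestInfo/Net/0/V4/IP |
             awk '/^Value:/ {print $2}')
        [ -n "$ip" ] && echo "$ip" | tee "$CACHE/$vm"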

  • MySQL command appends '@localhost' to username

    - by Mikee
    I just can't seem to figure this one out. I want to use the command line to connect to a MySQL database residing on another server. I went ahead and created the username and password for the user, and granted that user all privileges on the database. When using the command mysql -h <hostname> -u <username> -p, I get the following error:

        ERROR 1045 (28000): Access denied for user '<username>'@'<local_machine_hostname>' (using password: YES)

    The problem is that it keeps appending the current machine's hostname to the username. Obviously, user@<local_machine_hostname> is not correct. It doesn't matter what I type. For instance, if I type mysql -h <hostname> -u '<username>'@'<hostname>' -p, it does the same, only the error output says:

        Access denied for user '<username>@<hostname>'@'<local_machine_hostname>'

    Is there a setting in a configuration file that is allowing this to happen? It's really quite annoying. I need to set up a TikiWiki server, and it cannot connect, because during the step where you set up MySQL it keeps appending the local machine's hostname to the MySQL login name.
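
    For what it's worth, the 'user'@'host' suffix in the error is normal MySQL behaviour: accounts are identified by the pair (username, host the client connects from), so the message means no grant matches that pair. A sketch of a grant that would match, run on the database server (database, user and password are placeholders; '%' matches any client host):

        mysql -u root -p -e "GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%' IDENTIFIED BY 'mypassword'; FLUSH PRIVILEGES;"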

  • make pentaho report for ubuntu 11.04

    - by Hendri
    Currently I'm trying to install Pentaho Reports for OpenERP, which is referenced from https://github.com/WillowIT/Pentaho-...rver/build.xml. I have installed it before on some Windows-based laptops and it worked, but now, trying it on Ubuntu 11.04, the build fails with "build.xml:18: failed to create task or type..". These are the steps I took:

    1. Installed java-6-openjdk ("apt-get install java-6-openjdk"), then pointed JAVA_HOME at it by adding this line to /etc/environment ("nano /etc/environment"): JAVA_HOME="/usr/lib/jvm/java-6-openjdk"
    2. Installed Apache Ant ("apt-get install ant"), then added ANT_HOME="/usr/share/ant" to /etc/environment the same way.
    3. Checked the installation by running "ant", which prints: Buildfile: build.xml does not exist! Build failed
    4. Downloaded the Java server from the same GitHub project, copied it to an Ubuntu share folder, went to the extracted path and ran "ant war". That fails with:

        BUILD FAILED
        /share/java_server/build.xml:18: Problem: failed to create task or type antlib:org.apache.ivy.ant:retrieve
        Cause: The name is undefined.
        Action: Check the spelling.
        Action: Check that any custom tasks/types have been declared.
        Action: Check that any <presetdef>/<macrodef> declarations have taken place.
        No types or tasks have been defined in this namespace yet
        This appears to be an antlib declaration.
        Action: Check that the implementing library exists in one of:
            - /usr/share/ant/lib
            - /root/.ant/lib
            - a directory added on the command line with the -lib argument
        Total time: 0 seconds

    Is there a compatibility issue, or did I miss a step? I'm on a project that urgently needs this reporting, so please help me solve this issue. Thanks a lot in advance. Best regards.
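    That Ant message means the build file declares the Apache Ivy antlib but Ant cannot find the Ivy jar in any of the listed directories. A sketch of the usual remedy on Ubuntu (package name and jar path assumed from Debian packaging):

        sudo apt-get install ivy
        # if Ant still cannot resolve the antlib, link the jar into Ant's lib directory
        sudo ln -s /usr/share/java/ivy.jar /usr/share/ant/lib/ivy.jar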

  • MySQL ADO.NET Connector & MSSQL Integration Services

    - by user1114330
    Here I am, day three... attempting to sync a data view on a Windows Vista box (64 bit) running MSSQL 2012 and Visual Studio 2010. Sanity is slipping and hunger for progress fills my attention. I went through hell trying to get the MySQL ODBC drivers to do the job, but to no avail... everyone seems to be lost, and all the threads I can find offer solutions that do not work for me. The problem: System DSNs not being seen by SSIS (see "SSIS DSN Not Showing as ODBC Data Source").

    So I decide to try the ADO.NET connector... and to my surprise it actually appears in the data source list in SSIS. I take off running: I create a Data Flow Task, create an ADO.NET Source (a local MSSQL DB)... all is good as usual. Then I move swiftly to creating an ADO.NET Destination, enter my credentials... wow, I am finally selecting a database on my Linux server! I'm happy, thinking I've finally figured out a way to get the job done. Then I move to mappings... nope, something is wrong. I am getting an error that hurts my eyes:

        Pipeline component has returned HRESULT error code 0xC0208457 from a method call.
        Error at Data Flow Task [ADO NET Destination [81]]: Failed to get properties of external columns. The table name you entered may not exist, or you do not have SELECT permission on the table object, and an alternative attempt to get column properties through connection has failed. Detailed error messages are: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"database"."tablename"' at line 1.
        The descriptor files on path C:\Program Files (x86)\Microsoft SQL Server\110\DTS\ProviderDescriptors\ do not contain schema information for connection of type MySQL.Data.MySqlClient.MySqlConnection.

    So it looks like it can't get the information, and therefore I cannot map the tables properly. Any ideas on this would be ultra helpful... thanks in advance to all!
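
    The "syntax ... near '"database"."tablename"'" part suggests the destination quotes identifiers ANSI-style with double quotes, which MySQL rejects unless ANSI_QUOTES is enabled. A commonly suggested workaround, offered as a sketch only (it needs SUPER privilege, applies to new connections, and does not address the separate ProviderDescriptors complaint):

        mysql -u root -p -e "SET GLOBAL sql_mode = 'ANSI_QUOTES';"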

  • After adding skip-innodb mysql doesn't start

    - by Pentium10
    I am trying to set these values:

        #skip-bdb
        #skip-locking
        #skip-innodb

    When I add them to /etc/mysql/my.cnf, even if I turn on only one of them, MySQL fails to start after the service restart, and no error message is printed:

        sudo service mysql restart
        [ ok ] Stopping MySQL database server: mysqld.
        [FAIL] Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!

    Previously I made sure that I have no InnoDB tables, and all files of that type were removed. I tried looking for error files but I couldn't locate them: /var/log/mysql.err is a 0 byte file, and the /var/log/mysql folder has no files. rsyslog was replaced in the past with inetutils-syslogd, which might have changed the log files; that could be why I don't see any error logs. I am stuck on how to look or go forward.
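
    To see the actual startup error, one option is to run mysqld in the foreground so messages go to the terminal instead of the missing log. A sketch assuming stock Debian/Ubuntu paths; the default-storage-engine hint applies if this is MySQL 5.5 or later, where InnoDB is the default engine and disabling it aborts startup:

        sudo -u mysql /usr/sbin/mysqld --defaults-file=/etc/mysql/my.cnf
        # if it complains about the default storage engine, pair skip-innodb with
        # this line in the [mysqld] section of my.cnf:
        #     default-storage-engine = MyISAM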

  • Windows 7 access denied to executables.. by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but never did I see an answer. Here are two scenarios that nearly always reproduce it.

    The Explorer way:
    1. With Explorer, navigate to a directory containing at least one exe file.
    2. Go one directory up.
    3. Immediately delete the directory just navigated to.
    This yields a "Folder Access Denied" dialog stating "You need permission to perform this action. You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel. Hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work. Note: if in step 2 I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted.

    The Visual Studio way:
    1. Build a project producing an exe file.
    2. Run the executable, then close it.
    3. Immediately build the project again (by changing a single character in a source file, for example).
    This yields: fatal error LNK1168: cannot open /path/to/the.exe for writing. Note: again, if in step 2 I wait a minute or more before building again, the problem does not occur.

    Some specs: it happens on both Windows 7 32 and 64 bit, with VS2008/2010/2011; it happens on 3 different machines; I do not have a virus scanner of any kind; I do have a bunch of services disabled, but nothing that prevents Windows from running normally, and UAC is disabled as well; it happens on any type of disc; I always use a user account that is in the Administrators group.

    Obviously both scenarios are very similar and extremely reproducible, so I figured some process must have the file open for some reason and release it again later. However, using Sysinternals' handle -a, the exe file in question never shows up. (That is the correct way to use handle, right?) So while Explorer/VS are reporting they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how to solve it?

  • Backup Picasa 'people' tags data

    - by pelms
    OK, so I've spent a fair amount of time putting names to faces in Picasa 3.5, but in a few days (hopefully) my copy of Windows 7 should arrive and I'll need to reinstall Windows. So, does anyone know what I need to back up so that I don't have to re-enter all those name tags? N.B. I'm on Windows 7 RC and know that I don't have to do a clean reinstall, but I would prefer to.

    Outcome: I clean-installed Windows 7 and downloaded and installed Picasa. Unfortunately, the download link on the UK Picasa homepage still pointed to Picasa 3.0 (rather than 3.5), which doesn't have face recognition. This scanned my photo folders and overwrote the picasa.ini files along with the people information :¬( Fortunately I'd backed up the photos before installing Win 7, so after uninstalling Picasa 3.0 (along with its database), restoring the photos from backup and installing Picasa 3.5, I finally got my face names back.

    Extra: Google has now posted advice on how to migrate to Windows 7 and keep your Picasa database, meaning that it will not need to rescan your photos and will retain all information about them, including name tags. They have a method for upgrading and for a clean install of Win 7. Basically you need to back up:

        C:\Users\%username%\AppData\Local\Google\Picasa2
        C:\Users\%username%\AppData\Local\Google\Picasa2Albums

  • How to transform a csv to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data. Let's say date, volume, price and direction (sell/buy). Additionally there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction — classical database referencing. Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot or whatever. To do this I need both buy and sell price in one row. My thought was to preprocess the CSV to get another CSV containing the needed information, and then do the statistics. In the end I'd like a solution based on scripts and not on a spreadsheet, as the data might change often (exported from an online DB).

    My current solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if there exists a corresponding transaction. If found, a new row is written to the destination CSV; if not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times. This causes running times of 10 seconds for 300 lines, and as the line count might rise soon (10k lines), this is not perfect. I am aware that the many subshells opened in the script might cause the performance problems. Now my questions: Is bash/awk/sed/... a good way to do this? Should I first import all data into a "real" local database and use SQL? Is there an easy way to achieve the desired results?
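
    A single-pass awk join avoids re-reading the file for every row and should scale to 10k lines easily. A sketch, assuming a column layout of id,ref,date,volume,price,direction — adjust the field numbers to the real file:

        # first pass stores every row under its ID; second pass appends each
        # closing row to the opening row it references
        awk -F, 'NR==FNR { row[$1] = $0; next }
                 ($2 != "") && ($2 in row) { print row[$2] "," $0 }' \
            transactions.csv transactions.csv > combined.csv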

  • Strange mysql problem moving website from Ubuntu server to Mac server

    - by evan
    I'm moving a website (php/mysql) from an Ubuntu server to an OSX 10.6 server. I've set up Apache to run PHP scripts and installed the newest version of MySQL on the Mac. I copied all of the PHP files and dumped/imported all of the MySQL databases (including the mysql users database). When I visit the page served from the Mac, the page is able to connect to the database, but not to query it. Specifically, mysql_error() is returning this error message:

        NO SUCH FILE OR DIRECTORY

    The strange part is that if I change the PHP connection strings on the Ubuntu server so that they point to the Mac server, the page works correctly — so it seems MySQL is correctly set up on the Mac and definitely contains all of the users and tables it should. Thinking it was something to do with file permissions on the Mac, I've changed all of the files to 755, but it hasn't helped. Any ideas? Thanks!!

    UPDATE: I've found this error, which I'm relatively certain is related, in /var/log/apache2/error_log:

        PHP Warning: mysql_query(): A link to the server could not be established
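
    "No such file or directory" from the MySQL client functions usually means PHP is trying to reach the server through a Unix socket that is not where it expects; connections from the Ubuntu box work because they go over TCP. A sketch of the usual check and fix on OS X (the two socket paths are the common defaults — verify them on the machine):

        # ask the running server where its socket really is
        mysql -u root -p -e "SHOW VARIABLES LIKE 'socket';"
        # if PHP looks in /var/mysql/ but the server listens in /tmp/, bridge them
        sudo mkdir -p /var/mysql
        sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock
        # alternatively, use 127.0.0.1 instead of localhost in the connection string to force TCP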

  • SQL Server Subscriber Migration

    - by SuperCoolMoss
    We currently have one-way transactional replication from a SQL Server 2005 OLTP publisher/distributor to two subscribers (one SQL 2005 and the other SQL 2008 R2). Replication security is via the SQL Agents' domain service account (the same account is used on all boxes). The SQL 2008 R2 subscriber is used for BI purposes and hosts a database that has a subset of the production publisher database's tables, with different security and indexes. We need to migrate this BI subscriber to a newer box with more performant hardware. The plan is as follows:

    1. Stop replicating to the BI box (continue replicating to the other subscriber).
    2. Back up all databases on the BI box (including system databases).
    3. Restore all databases (including master, in single-user mode) to the new BI box (which has SQL Server 2008 R2 already installed).
    4. Take the old BI box off the network and shut it down.
    5. Rename and re-IP the new BI box to be the same as the old box.
    6. Switch replication back on.

    Are there any flaws in this approach?

  • When a server IP changes, do existing TCP (e.g. http/mysql) connections remain running?

    - by Luke Cousins
    We have some PHP-FPM servers, and when they need a database connection, they connect to an HAProxy server which selects a database server for them and the connection opens. When we then want to perform some maintenance on the HAProxy servers (such as config changes requiring an HAProxy restart), the process is as follows:

    1. Shut down Keepalived on the HAProxy server.
    2. Wait for the floating IP to transfer to the backup HAProxy server (also running Keepalived).
    3. Wait until HAProxy stats reports just one connection (our own check of how many connections there are — see the sketch below).
    4. Restart HAProxy.
    5. Restart Keepalived.

    As step 2 occurs, what will happen to the MySQL connections that are open at that point? According to this "TCP Sessions and IP Changes" question, the connections will be dropped. Is this really the case? If so, what, if anything, can be done to prevent this happening? Can the connections be in some way forced to use the main (non-floating) IP of the server? We also have a similar setup with two Nginx servers running Keepalived, and we were planning on doing the equivalent there. If we do, the same question applies: what happens to the existing HTTP connections when the IP moves to the other server? I appreciate your help.
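
    For step 3, the connection count can be read off HAProxy's admin socket rather than the web stats page. A sketch, assuming haproxy.cfg exposes "stats socket /var/run/haproxy.sock" (scur, the current-sessions counter, is the 5th field of "show stat"):

        echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio |
            awk -F, 'NR > 1 { total += $5 } END { print total, "current sessions" }'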

  • Configuring MySQL for Power Failure

    - by Farrukh Arshad
    I have absolutely no experience with databases and MySQL. The situation: I have an embedded device running a MySQL database with a web-based application. When my embedded device is shut down, the power is simply cut; I cannot have a controlled shutdown. Given this, how can I configure MySQL to protect it from failures, and in case of a failure, have the best possible support for recovering my database? While researching this, I came across the InnoDB engine, as well as some configuration options to set, like sync_binlog=1 and innodb_flush_log_at_trx_commit=1. I have noticed my default engine is InnoDB and binary logs are also enabled. What other configuration should I make for the best possible failure and recovery support?

    Updated: I will be using the InnoDB engine, which supports transactions. My question is how best I can configure it (InnoDB + MySQL) so that it provides the best possible fail-safe as well as crash recovery mechanism. One configuration option I came across is to enable binary logging, which InnoDB uses at the time of recovery. Regards, Farrukh Arshad
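
    A sketch of the durability settings the post mentions, collected into a config fragment (Debian-style include path assumed; both settings trade write throughput for crash safety):

        cat <<'EOF' | sudo tee /etc/mysql/conf.d/durability.cnf
        [mysqld]
        # flush and sync the InnoDB log at every commit
        innodb_flush_log_at_trx_commit = 1
        # sync the binary log to disk after every write
        sync_binlog = 1
        EOF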

  • How to connect to DB2 when the password ends with '!' in Windows

    - by AngocA
    I am facing a problem using the DB2 tools when connecting to a DB2 database with a generic account whose generated password ends with the bang sign '!'. I am not allowed to change the password because it is already used by other processes. I know the user is valid and I can connect to the database with its credentials, but not from all DB2 tools. When using the Control Center it is okay. When using the Command Editor (GUI) or the Command Window, I get this error message:

        connect to WAREHOUS user administrator using !
        SQL0104N An unexpected token "!" was found following "<identifier>". Expected tokens may include: "NEW". SQLSTATE=42601

    Let's say that my password is pass@!. I am trying to use

        c:\>db2 connect to sample user administrator using "pass@!"

    or

        c:\>db2 connect to sample user administrator using pass@!

    and in both cases I get the same error message. I could change the way I connect, for example:

        c:\>db2 connect to sample user administrator
        Enter current password for administrator:

    but I cannot use that from a batch file easily. I would like to know how I can connect from the Command Editor, in order to use this user from the graphical tools. BTW, I know that the Control Center is deprecated.
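
    One workaround that is often suggested for special characters in CLP commands is to quote the entire statement, so the command processor hands it to DB2 untokenized — offered as a sketch only, since it has not been verified against this exact setup:

        db2 "connect to sample user administrator using pass@!"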

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors, because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. 167 visits is not a lot, so I've made some short-term optimizations, like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content, and soon I'm going to offload all of my images to S3 or CloudFront.

    So this leads me to my question. If this doesn't seem to help, and I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping troubleshoot. In SQL Server 2000, the database is about 120 GB. Something caused the transaction log to grow MUCH larger than normal, to over 100 GB — some hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less thanks to its hourly transaction log backups. It is my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size: although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html — but in that post I can't tell whether their log was full or empty, and there are no replies to it. So I am guessing it is not a problem. Does anyone know for sure?
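
    If the sheer size of the mostly-empty file is the worry, the log can be shrunk back down now that the hung transaction is gone and log backups run hourly. A sketch for SQL Server 2000 (the logical log file name and the 2 GB target are placeholders):

        osql -E -d MyDatabase -Q "DBCC SHRINKFILE (MyDatabase_Log, 2048)"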

  • My yum repository is able to search packages, but not able to install them in RHEL?

    - by mandy
    I set up yum from DVD. The following is the content of my .repo file:

        [dvd]
        name=Red Hat Enterprise Linux Installation DVD
        baseurl=file:///media/dvd
        enabled=0

    I'm able to search packages. However, during installation I get the error below:

        [root@localhost dvd]# yum install libstdc++.x86_64
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN. RHN support will be disabled.
        Setting up Install Process
        Nothing to do

    My yum search output:

        [root@localhost dvd]# yum search gcc
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN. RHN support will be disabled.
        ================================ Matched: gcc ================================
        compat-libgcc-296.i386 : Compatibility 2.96-RH libgcc library
        compat-libstdc++-296.i386 : Compatibility 2.96-RH standard C++ libraries
        compat-libstdc++-33.i386 : Compatibility standard C++ libraries
        compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries
        cpp.x86_64 : The C Preprocessor.
        libgcc.i386 : GCC version 4.1 shared support library
        libgcc.x86_64 : GCC version 4.1 shared support library
        libgcj.i386 : Java runtime library for gcc
        libgcj.x86_64 : Java runtime library for gcc
        libstdc++.i386 : GNU Standard C++ Library
        libstdc++.x86_64 : GNU Standard C++ Library
        libtermcap.i386 : A basic system library for accessing the termcap database.
        libtermcap.x86_64 : A basic system library for accessing the termcap database.

    Please guide me on this. I want to install gcc on my RHEL.
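
    One thing worth noting: the stanza above has enabled=0, so yum ignores the DVD repo when resolving installs (searches can still hit cached metadata). A sketch of the two usual remedies (the .repo filename is assumed):

        # enable the repo for this command only
        yum --enablerepo=dvd install gcc
        # or enable it permanently
        sed -i 's/^enabled=0/enabled=1/' /etc/yum.repos.d/dvd.repo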

  • Discover the public ip of a network without being connected

    - by Martin Trigaux
    Let's say I'm next to a network and can see its traffic (with airodump or a similar tool), but cannot decipher it because I am not connected to the network. Is it possible to discover the public IP address of the network? I know the MAC addresses of the users connected to the network — do I also know the router's? If yes, maybe there is a way to do the matching. I know IP addresses are not forever, but some addresses are static and never change. Maybe there is a database of MAC addresses that has recorded this; Google has a database that matches MAC addresses to geographical coordinates, so why not one with IP addresses? Another idea: if I know where I am, I can maybe guess the IP range used in the city by the ISP (is that findable?) and then try to "ping" each IP in the range (if it is a /24 that's possible, maybe even a /16). Would I get some information back, like the MAC of the box, or see some traffic on the network? These are two ideas I had. I don't know if they are doable, and they are certainly not perfect. Can you think of others? By trying several methods, maybe I can get a good guess with a bit of luck. Thank you.

  • Connecting to a subdomain severs the connection to the domain itself. What's going on?

    - by TheAgent
    Hi all. We have a website on a third-party server (leased and shared with other websites), and the host provides access to our SQL Server database through a subdomain of the form mssql.DomainName.com. I was told to use SQL Server Management Studio Express to connect to this subdomain in order to manage the database. After a few tries and many "Timeout" messages, I finally managed to connect to the server; everything was fine. But now I can't connect to DomainName.com anymore. Trying to browse DomainName.com in Firefox fails during the address lookup with "the server was not found". I have to disconnect Management Studio from the server and wait a couple of hours for DomainName.com to become available again — and after that, reconnecting to the SQL Server repeats the scenario. While I can't browse DomainName.com directly, I can reach it through a proxy, which suggests the problem is somehow related to the DNS server my computer asks to translate the name to the corresponding IP. Has anyone seen anything like this before? Any ideas? Thanks in advance.
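
    A small sketch for checking whether the local resolver is at fault, comparing its answer against a public resolver's (DomainName.com stands in for the real domain, as in the post):

        nslookup DomainName.com
        # if this second query answers while the first fails, the default resolver
        # is the problem, not the domain itself
        nslookup DomainName.com 8.8.8.8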

  • Clone Microsoft VM > SharePoint issues

    - by Rob
    Hi, we have an existing (production) Hyper-V VM that I want to clone to create a staging server. This server is NOT on a domain / Active Directory — it uses all local computer accounts. It runs Windows Server 2008, SharePoint 2007, SQL Server 2008, Reporting Services, and some custom line-of-business web applications. We rename the server etc. as per the documentation we have, but are having trouble specifically with SharePoint: some of the sites do not come up, and there are issues adding/updating site settings such as site owners. Does anyone know of a definitive resource for this process? Thanks.

    Update: We seem to have most of it working, but the Shared Services Provider and the SSP's admin site are not working. In addition, the custom web parts that reference IIS Session objects are failing (which seems to be related to the Shared Services Provider). All the Windows user accounts for the various services are renamed during the server cloning, but the Windows account names in SQL Server are not automatically changed. We've tried to change them in SQL Server, but it still does not seem to work. This seems like a lot of work, and I'm thinking there must be some guidance out there. We are also getting this:

        NullReferenceException: Object reference not set to an instance of an object.
        Microsoft.Office.Server.Administration.SqlSessionStateResolver.System.Web.IPartitionResolver.ResolvePartition(Object key) +135
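
    The Windows account names stranded in SQL Server match the documented cleanup steps for a renamed SQL Server host — sketched below with OLDNAME/NEWNAME and the login name as placeholders (run on the clone, then restart the SQL Server service):

        sqlcmd -E -Q "EXEC sp_dropserver 'OLDNAME'; EXEC sp_addserver 'NEWNAME', 'local';"
        sqlcmd -E -Q "ALTER LOGIN [OLDNAME\spservice] WITH NAME = [NEWNAME\spservice];"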

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SAAS application specially customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular when UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application — but our application has MANY clients, and all features are working fine for all other clients. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine. A speed test shows that they should get 1.2 Mbps up, so a 3 MB file should take about 20 seconds to upload. It takes 60+ seconds on their network, and sometimes even small files take OVER 10 minutes to upload, or they time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SAAS applications they use let them upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the client to begin ruling things out? It seems we need to somehow measure the LATENCY of the networks involved, or even at the switch level understand whether packets are getting dropped somewhere and why. Where should I start? Any help is appreciated; I'll provide more info upon request.
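
    A sketch of measurements the client's IT team could run to separate raw upload capacity from path problems; iperf3 and mtr are tool suggestions rather than anything from the post, and the hostnames are placeholders:

        # raw upload throughput to a host we control (run "iperf3 -s" on our end first)
        iperf3 -c test.example.com -n 3M
        # per-hop latency and packet loss toward the app server, averaged over 100 probes
        mtr --report --report-cycles 100 app.example.com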

  • Google Drive terminates without error on startup

    - by Iszi
    I've used Google Drive for a while now, but it won't start up after being installed on my latest system rebuild. I'm still using the same OS, hardware, and basic software load (antivirus, firewall, etc.) that I have for years, during which I had no previous problems with Drive.

    OS: Windows 7 Ultimate x64
    Google Drive version: 1.12.5329.1887

    Now, whenever I try to run Google Drive, it just spawns two instances of the executable, which die shortly after. No error messages are posted to the desktop, and nothing indicating any problem is written to the Event Log. After some research, I've yet to find anyone with the same problem who's found an answer. I did find out how to run Google Drive in diagnostic mode, using the --vv parameter at the command line. After that, I opened up the sync log and got this:

        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 OS: Windows/6.1.7601-SP1
        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 Google Drive (build 1.12.5329.1887)
        2013-10-31 17:11:24,039 DEBUG pid=3664 1892:MainThread logging:1608 DEBUGGING DUMP is ON.
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 ERROR, UNEXPECTED EXCEPTION
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 [Error 5] Access is denied
        Traceback (most recent call last):
          File "<string>", line 232, in Main
          File "<string>", line 118, in RegisterCustomFileTypes
          File "P:\p\agents\hpal4.eem\recipes\353983091\base\b\drb\googleclient\apps\webdrive_sync\windows\build\pyi.win32\main\outPYZ1.pyz/windows.registry", line 62, in GetValue
        WindowsError: [Error 5] Access is denied
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Crash reporting disabled. Ignoring report.
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Exiting with error code: 0

    I'm running on an account with Administrator-level permissions, and have even tried using "Run As Administrator" on the EXE. I'm not sure why it's looking for a P:\ drive, as no such volume has ever been mounted on this system. What should I do to further troubleshoot, and resolve, this issue?

  • User permissions on Linux (proftpd / nginx)

    - by user55745
    I've been having a complete nightmare trying to configure proftpd. I've got the ProFTPD server working with an SQL database. However, I want any uploaded files to be viewable by the web server running on the same box. The folders get created in /var/tmp/ as:

        drwx------ 2 ftpuser ftpgroup 4096 Oct 8 20:35 50730c4346512
        drwx------ 2 ftpuser ftpgroup 4096 Oct 8 20:38 50730f3a811ca

    I've tried adding www-data to the group with

        usermod -g www-data ftpuser

    but this doesn't give the web server access. In proftpd.conf I have the umask set:

        Umask 0022

    It doesn't seem to make a difference what value I set it to. My /etc/group (I'm sure I've messed up one of these two, but I'm getting desperate):

        ftpgroup:x:2001:www-data
        www-data:x:33:ftpgroup

    My /etc/passwd:

        www-data:x:33:33:www-data:/var/www:/bin/sh
        proftpd:x:108:65534::/var/run/proftpd:/bin/false
        ftp:x:109:65534::/srv/ftp:/bin/false
        ftpuser:x:2001:33:proftpd user www-data:/bin/null:/bin/false

    The ftpuser table in the database has uid/gid set to 2001 for both. I'm going absolutely crazy trying to solve this; any help would be greatly appreciated. P.S. If I connect to the FTP server manually I can upload files via FileZilla, but uploads from the web camera are still not working, even though there is traffic going back and forth between the server and the camera.
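
    Two details that may matter here, sketched under the assumption of a stock Debian layout: ProFTPD's Umask directive takes a second value that applies to directories, and group ownership only helps if the web server's user is in the group that owns the uploads (rather than the other way around):

        # proftpd.conf -- the first value is the file umask, the second the directory
        # umask, so this yields drwxr-xr-x directories instead of drwx------:
        #     Umask 022 022
        # let the web server read the uploads by putting www-data into ftpgroup
        sudo usermod -a -G ftpgroup www-data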

  • Weird glitches on Intel iGPU

    - by FrederikVds
    I have a weird problem that I can't manage to describe in one word, so I'm having trouble searching for a solution. My monitors sometimes go black for a tenth of a second. Other times, they show the image shifted a few centimeters to the left or to the right. This happens on both of my monitors, but not necessarily at the same time. I would say it happens about once a minute, unless the machine is under heavy load, in which case it can happen every second or so. Interestingly, heavy CPU/memory usage can also cause this, not just heavy GPU usage. It only happens when both monitors are at 1920x1080, not when one of them, or both, are at a lower resolution. It also happens when they are in mirrored mode instead of extended desktop mode. My GPU is obviously not at full clock speed most of the time: this happens at 350 MHz as well as at 1200 MHz, so it doesn't seem to be a matter of too little performance. I'm not seeing any traditional artefacts, even when I use MSI Kombustor — only these full-screen glitches. CPU stressing software isn't reporting any issues either. I'm thinking maybe the connection between my CPU and my PCH isn't fast enough, but I can't find anyone with the same problem to confirm that. I'd rather not invest in a discrete GPU without being certain it will fix something. Does anyone know how to search for this, or even better, does anyone have a solution? My full specs are below. Thanks in advance!

    Specs:
    - ASUS P8Z77-M
    - Intel Core i5-3570K (with Intel HD 4000 Graphics)
    - 2x4 GB AMD Performance Edition memory
    - Corsair Force 3 Series Rev. B 120GB SSD
    - Maxtor 200GB HD
    - Lite-On DVD-RW
    - Antec 350 Watt PSU
    - 64-bit Windows 7 Professional
    - 2x Iiyama ProLite E2208HDS displays

  • How can I stop Excel from eating my delicious CSV files and excreting useless data?

    - by atroon
    I have a database which tracks sales of widgets by serial number. Users enter purchaser data and quantity, and scan each widget into a custom client program. They then finalize the order. This all works flawlessly. Some customers want an Excel-compatible spreadsheet of the widgets they have purchased. We generate this with a PHP script which queries the database and outputs the result as a CSV with the store name and associated data. This works perfectly well too. When opened in a text editor such as Notepad or vi, the file looks like this:

        "Account Number","Store Name","S1","S2","S3","Widget Type","Date"
        "4173","SpeedyCorp","268435459705526269","","268435459705526269","848 Model Widget","2011-01-17"

    As you can see, the serial numbers are present (in this case twice; not all secondary serials are the same) and are long strings of numbers. When this file is opened in Excel, the result becomes:

        Account Number  Store Name  S1           S2  S3           Widget Type       Date
        4173            SpeedyCorp  2.68435E+17      2.68435E+17  848 Model Widget  2011-01-17

    As you may have observed, the serial numbers are enclosed by double quotes. Excel does not seem to respect text qualifiers in .csv files. When importing these files into Access, we have zero difficulty. When opening them as text, no trouble at all. But Excel, without fail, converts these files into useless garbage. Trying to instruct end users in the art of opening a CSV file with a non-default application is becoming, shall we say, tiresome. Is there hope? Is there a setting I've been unable to find? This seems to be the case with Excel 2003, 2007, and 2010.
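
    A common workaround is to have the generating script emit each long serial as an Excel text formula, ="...", which Excel keeps as literal text when the file is opened directly — a sketch of what such a line would look like (note the doubled inner quotes required by CSV escaping; other CSV consumers would then see the = wrapper too):

        "4173","SpeedyCorp","=""268435459705526269""","","=""268435459705526269""","848 Model Widget","2011-01-17"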

  • W7-pro indexing mydoc on disk partition does not work

    - by Yvan Thery
    I am working on an HP 7100 mini tower running W7 Pro 64-bit. My local HD includes C:\ plus 2 disk partitions: all my documents are located on disk partition L:\ and all my media files are on disk partition M:\. The indexing process works well on C:\ and M:\ but no longer indexes L:\, even though all of them are allowed to be indexed and SYSTEM is present on every drive's security tab. I have tried rebuilding the index with a new setting that includes a few directories from drives C, M and L, but L: still does not get indexed. One more thing I can tell you: even after rebuilding the index, I can find some residual directories and files that are outside the test selection — as if unerased components remain in the indexing database. As I do not know precisely how the indexing process works, it is hard to know what to do. Recently I had a bad time after using a system restore point... maybe that corrupted the index? If I start indexing the whole L:\ disk partition, the indexer stops at 39 items found, although many more exist. Could anyone advise on the process to create a new indexing database? Any idea how to get out of this mess? Many thanks for assistance. Yvan
