Search Results

Search found 8172 results on 327 pages for 'job interview'.


  • cause for mysql crash

    - by user1322092
    A cron job automatically restarted my MySQL database. What could have caused the crash, and can you suggest how to resolve or monitor it? I would REALLY appreciate your input.

        120715 14:38:58 mysqld started
        120715 14:38:58 InnoDB: Started; log sequence number 0 411137570
        120715 14:38:58 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
        120715 15:14:21 [Note] /usr/libexec/mysqld: Normal shutdown
        120715 15:14:23 InnoDB: Starting shutdown...
        120715 15:14:25 InnoDB: Shutdown completed; log sequence number 0 411166467
        120715 15:14:25 [Note] /usr/libexec/mysqld: Shutdown complete
        120715 08:14:25 mysqld ended
        120715 08:14:26 mysqld started
        120715 8:14:26 InnoDB: Started; log sequence number 0 411166467
        120715 8:14:26 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
        121212 09:15:32 mysqld started
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121212 9:15:58 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121212 9:17:28 InnoDB: Started; log sequence number 0 554145193
        121212 9:17:57 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
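    For what it's worth, the log itself shows a clean shutdown and restart in July (Normal shutdown at 15:14), and then, months later in December, an unclean stop: the LSN mismatch and "Database was not shut down normally!" mean mysqld was killed without a clean shutdown (power loss, OOM killer, a kill -9) and InnoDB ran crash recovery. To at least get alerted the next time it happens, a cron-driven watchdog can tail the error log for recovery markers. A rough sketch in Python, assuming the error log lives at /var/log/mysqld.log (check log-error in my.cnf):

        #!/usr/bin/env python
        # Minimal sketch of a cron-driven watchdog for the MySQL error log.
        # Assumptions (not from the original post): the error log path below,
        # and that this runs from cron every few minutes.
        import os

        LOG = "/var/log/mysqld.log"           # assumed path; see log-error in my.cnf
        STATE = "/var/tmp/mysqld_log.offset"  # remembers how far we have read

        MARKERS = ("Starting crash recovery",
                   "was not shut down normally",
                   "mysqld started")

        def main():
            offset = 0
            if os.path.exists(STATE):
                with open(STATE) as f:
                    offset = int(f.read().strip() or 0)
            with open(LOG) as f:
                f.seek(0, 2)
                end = f.tell()
                if end < offset:      # log was rotated; start over
                    offset = 0
                f.seek(offset)
                while True:
                    line = f.readline()
                    if not line:
                        break
                    if any(m in line for m in MARKERS):
                        print("mysql restart/recovery event:", line.strip())
                offset = f.tell()
            with open(STATE, "w") as f:
                f.write(str(offset))

        if __name__ == "__main__":
            main()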

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: Approximately 1.25 years ago, the company I work for was acquired by a larger 400 person company. Before acquisition (and today still) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after acquisition, the now owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) is requiring them to retrieve & store from us any related information. Because we were a separate company up until acquisition, there is a high probability that our personal machines might contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc for work. Email is via POP3s and not hanging around on the mail server - it's on everyone's client. Documents are spread across personal machines. So, now they want us each to backup our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents. That single folder will be excluded from backup. Of course, that means total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article

  • Which free RDBMS is best for small in-house development?

    - by Nic Waller
    I am the sole sysadmin for a small firm of about 50 people, and I have been asked to develop an in-house application for tracking job completion and providing reports based on that data. I'm planning on building it as a web application. I have roughly equal experience developing for MySQL, PostgreSQL, and MSSQL. We are primarily a Windows-based shop, but I'm fairly comfortable with both Windows and Linux system administration. These are my two biggest concerns:

      Ease of manageability. I don't expect to be maintaining this database forever. For the sake of the person that eventually has to take over for me, which database has the lowest barrier to entry?
      Data integrity. This means transaction-safe, robust storage, and easy backup/recovery. Even better if the database can be easily replicated.

    There is not a lot of budget for this project, so I am restricted to working with one of the free database systems mentioned above. What would you choose?

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold:

      Reduce storage utilization
      Reduce the bandwidth needed to sync the library with cloud storage

    Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00. If I copy all of the files (make exact duplicates of the three), the dedup ratio climbs, so I know that deduplication is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; perhaps, though, it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job, is appreciated.
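    One hypothesis worth testing outside ZFS: dedup operates on whole records (recordsize, 128 KiB by default), so two FLAC files whose audio differs only by a shifted, variable-length header share no identical records even if the audio bytes are the same. A rough Python sketch that mimics record-level dedup by counting identical fixed-size blocks between two files (file names are placeholders):

        # Rough sketch (not from the original post): count identical fixed-size
        # blocks between two FLAC files, mimicking ZFS's record-level dedup.
        # A one-byte shift in the header makes every subsequent block differ.
        import hashlib
        import sys

        def block_hashes(path, blocksize):
            with open(path, "rb") as f:
                while True:
                    block = f.read(blocksize)
                    if not block:
                        break
                    yield hashlib.sha256(block).hexdigest()

        def main(file_a, file_b, blocksize=128 * 1024):
            a = set(block_hashes(file_a, blocksize))
            b = set(block_hashes(file_b, blocksize))
            print("identical %d-byte blocks: %d" % (blocksize, len(a & b)))

        if __name__ == "__main__":
            main(sys.argv[1], sys.argv[2])

    If this reports near-zero matches at 128 KiB, block-level dedup has nothing to find regardless of tuning; zdb -S <pool> can likewise simulate the achievable dedup ratio before committing RAM to it.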

    Read the article

  • I have to manually change the DNS suffix order every time I connect to VPN. Can I change this permanently or fix the problem somehow?

    - by CarlB
    Sorry in advance but I'm a programmer, not a network engineer, so I'm a noob at this stuff. Anyway, when I am not connected to VPN from my work PC at home, I have the following DNS suffixes listed (real domain names substituted):

        enterprise.org
        network.org
        company.com
        us.enterprise.org

    After connecting to VPN, one more DNS suffix is added to the very top of the list:

        problem-domain.com

    At this point, most network functions that I can normally perform when actually connected to the LAN in the office are unusable. I get error messages about the network paths not being found and what-not. Anyway, I played around with the suffixes and realized that if I just moved problem-domain.com down one spot to the second in the list, all the problems went away. Unfortunately, it returns to the top spot every time I reconnect, and I tend to get disconnected frequently. Is there something else I can do about this or should I just contact the IT department? I've had this problem before and they weren't able to resolve it, but I suppose it would be worth trying again if I could get a different person on the job. What I don't understand is that I thought it didn't matter what order the suffixes were in? Isn't Windows supposed to go through each suffix until it finds a match (or has gone through all the suffixes)? Why is it quitting after the first one?
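    A possible workaround while waiting on IT, sketched in Python for Windows: the global suffix search list lives in the registry (SearchList under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters - this location is my assumption for a non-GPO machine; group policy uses a Policies key instead), so a small script run after connecting can demote problem-domain.com to second place:

        # Hedged sketch for Windows: pin the DNS suffix search list after the
        # VPN connects, demoting problem-domain.com to second place. The
        # registry location is an assumption (a GPO-managed machine keeps the
        # list under HKLM\SOFTWARE\Policies\...\DNSClient); run elevated.
        import winreg

        KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
        DEMOTE = "problem-domain.com"   # suffix that must not be first

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
            current, _ = winreg.QueryValueEx(key, "SearchList")
            suffixes = [s for s in current.split(",") if s]
            if suffixes and suffixes[0] == DEMOTE:
                suffixes.insert(1, suffixes.pop(0))   # move it to slot 2
                winreg.SetValueEx(key, "SearchList", 0, winreg.REG_SZ,
                                  ",".join(suffixes))
                print("reordered:", ",".join(suffixes))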

    Read the article

  • Dual Monitor + Virtual Desktop software (plus for cube)

    - by xenithorb
    I've recently purchased another monitor; my first one is a TV and is much larger. I now sit at a desk and use my shiny new 24" LED more often, but I like to extend the desktop onto the TV. The problem this presents is that, to save power and extend the longevity of my 47" VIZIO, I try to keep it off when possible. What I'm seeking sounds very simple - if any of you have ever used Compiz or Deskspace (Yod'm), you'll know what I'm referring to when I talk about a "cube." The most important functionality I'm looking for is the ability to scroll desktop contents between both displays and virtual desktops. Deskspace does an excellent job of presenting an attractive cube, but it creates a separate cube and virtual desktop space for the second extended monitor (now the TV) - again, what I'm looking to do is scroll between virtual desktops, passing through both monitors. The net effect of this functionality would allow me to scroll the contents of the extended monitor to the first monitor should a window get caught there, without having to turn on the TV. So imagine the horizontal portion of a cube as being actual real monitors - is there anything that allows one to rotate desktops between displays?

    Read the article

  • Specific issue on data pump API in oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to do that is a stored procedure on the server side which the client can call, as the client has no Oracle tools to perform the backup. I've searched through the available solutions and found that a stored procedure is indeed the best way, and that the Oracle Data Pump API is the best thing to use inside a PL/SQL stored procedure. I have two specific questions about the API:

    The first: is the DETACH function, which detaches the handle, necessary at the end of the procedure, and what happens if I don't use it? I read the Oracle documentation but didn't get their point - they say it doesn't terminate the job but indicates that the user is not interested in it - and when I use DETACH at the end of my procedure the exported .dmp file disappears.

    The second: to perform a user (client-side) triggered backup where the modifications are only to the data, I used the TABLE mode for the export operation. But the VERSION parameter - what should it be? I also read the documentation but couldn't determine what I need (LATEST or COMPATIBLE)?

    Thanks
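    A hedged sketch of the usual DBMS_DATAPUMP calling pattern, for comparison (object names, directory, and credentials are placeholders, not from the original post): the key ordering is that WAIT_FOR_JOB blocks until the job completes, and only then is DETACH called, so the handle is released after the dump file is fully written. Detaching from a job that is then abandoned before completion can take its partial dump file with it, which would match the disappearing .dmp:

        # Hedged sketch: a client-side Python call running a server-side
        # DBMS_DATAPUMP table export. DATA_PUMP_DIR, MYTABLE and the
        # connection details are placeholders.
        import oracledb

        PLSQL = """
        DECLARE
          h     NUMBER;
          state VARCHAR2(30);
        BEGIN
          h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE');
          DBMS_DATAPUMP.ADD_FILE(handle    => h,
                                 filename  => 'backup.dmp',
                                 directory => 'DATA_PUMP_DIR',
                                 filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
          DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                        name   => 'NAME_EXPR',
                                        value  => 'IN (''MYTABLE'')');
          DBMS_DATAPUMP.START_JOB(h);
          DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);  -- block until COMPLETED
          DBMS_DATAPUMP.DETACH(h);               -- release handle afterwards
        END;
        """

        with oracledb.connect(user="backup_user", password="...",
                              dsn="dbhost/orcl") as conn:
            with conn.cursor() as cur:
                cur.execute(PLSQL)

    On VERSION: for a backup that will be re-imported into the same database, the default COMPATIBLE is generally the safe choice; VERSION mainly matters when the dump must be importable by an older target database.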

    Read the article

  • Setup Apache with IPv6

    - by mrz
    I have two virtual machines on my computer. I have Apache installed on one of them (referred to as the "server" from here on), and I have set Apache to listen on an IPv6 address. When I enter that IPv6 address into a web browser on the "server" I see my index.html file. So far so good... I want to be able to open a web browser on the other virtual machine (the "client") and see index.html. But when I try entering the IPv6 address of the "server" in a web browser on the "client", I get an "Unable to establish a connection to the server" error. I can ping6 the "client" from the "server" and vice versa. There is one thing worth mentioning: ifconfig on the "server" shows three different IPv6 addresses, two of which are Global scope and one of which is Link scope. On the "client" there is only one Link-scope IPv6 address. I can only ping the Link-scope addresses; pinging the other IPv6 addresses results in connect: Network is unreachable. And if I set Apache to listen on the Link-scope IPv6 address, rcapache2 start fails. Any thoughts on what I am probably missing or doing wrong?
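    The symptom pattern (only link-local addresses answer ping, global addresses unreachable) suggests the two VMs have no common IPv6 routing - for instance a host-only network with no router advertisements - so the global addresses on the "server" aren't reachable from the "client" at all. Link-local addresses do work VM-to-VM, but only with a zone id attached. A rough Python sketch for testing from the "client" (addresses and interface name are placeholders):

        # Hedged sketch: test whether the "server" VM's port 80 is reachable
        # over IPv6 from the "client". A link-local address needs the
        # %interface zone id; both addresses below are placeholders.
        import socket

        CANDIDATES = [
            "fe80::a00:27ff:fe12:3456%eth0",  # link-local, zone id required
            "2001:db8::10",                   # global scope, if routed
        ]

        for host in CANDIDATES:
            try:
                with socket.create_connection((host, 80), timeout=3):
                    print(host, "-> connected")
            except OSError as exc:
                print(host, "->", exc)

    If only the link-local address connects, one route forward is getting routable addresses onto both VMs (e.g. via radvd, or static addresses in a shared prefix) so Apache can Listen on a global address, since a plain Listen directive has no way to carry the %eth0 zone.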

    Read the article

  • Why would the Apache parent process restart silently?

    - by miracle
    I run apache 2.2.9 with mpm prefork on debian lenny. Following http://httpd.apache.org/docs/2.2/mod/prefork.html, I would expect that there is one parent process, running as root and listening as configured, which would start child processes as defined by the Min/Max/etc. directives. I expect the children to be restarted as per MaxRequestsPerChild, but the parent process to stay put with one process id until I restart it manually. Out of a little paranoia, I started monitoring listening ports including process ids. I have a cron job every 20 minutes to run netstat -ap | grep LISTEN and diff the output. Sometimes (about once per day) I see a series of this:

        8c8
        < tcp6  0  0 [::]:www    [::]:*  LISTEN  6194/apache2
        ---
        > tcp6  0  0 [::]:www    [::]:*  LISTEN  6607/apache2
        10c10
        < tcp6  0  0 [::]:https  [::]:*  LISTEN  6194/apache2
        ---
        > tcp6  0  0 [::]:https  [::]:*  LISTEN  6607/apache2

    Over a period of an hour or three, the parent would change its pid at least once every 20 minutes, without any explanation in the log files or any other hint that anything is going wrong. This is not what I expected. What am I missing?
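    One detail narrows this down: a graceful restart (apache2ctl graceful, SIGUSR1) keeps the parent and its PID; only a full stop/start replaces the parent process. So something on the box is doing a real restart, and cron-driven jobs such as logrotate are the usual suspects. A finer-grained watcher than the 20-minute netstat diff would timestamp each change so it can be matched against cron entries - a rough sketch, assuming Debian's usual pidfile location:

        # Hedged sketch: log every change of the Apache parent PID with a
        # timestamp, to correlate against cron (e.g. logrotate restarting
        # the server). The pidfile path is Debian's usual default.
        import time

        PIDFILE = "/var/run/apache2.pid"

        last = None
        while True:
            try:
                with open(PIDFILE) as f:
                    pid = f.read().strip()
            except IOError:
                pid = "<no pidfile>"
            if pid != last:
                print(time.strftime("%Y-%m-%d %H:%M:%S"), "parent pid:", pid)
                last = pid
            time.sleep(10)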

    Read the article

  • Potential impact of large broadcast domains

    - by john
    I recently switched jobs. By the time I left my last job our network was three years old and had been planned very well (in my opinion). Our address range was split down into a bunch of VLANs, with the largest subnet a /22 range. It was textbook. The company I now work for has built up their network over about 20 years. It's quite large, reaches multiple sites, and has an eclectic mix of devices. This organisation only uses VLANs for very specific things. I only know of one usage of VLANs so far, and that is the SAN, which also crosses a site boundary. I'm not a network engineer, I'm a support technician. But occasionally I have to do some network traces for debugging problems, and I'm astounded by the quantity of broadcast traffic I see. The largest network is a straight Class B network, so it uses a /16 mask. Of course, if that were filled with devices the network would likely grind to a halt. I think there are probably 2000+ physical and virtual devices currently using that subnet, but it (mostly) seems to work. This practice seems to go against everything I've been taught. My question is: what measurement of which metric would tell me that there is too much broadcast traffic bouncing about the network? And what are the tell-tale signs that you are perhaps treading on thin ice? The way I see it, more and more devices are being added, and that can only mean more broadcast traffic, so there must be a threshold. Would things just get slower and slower, or would the effects be more subtle than that?
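    There is no single magic threshold, but a usable metric is broadcasts per second as seen by an ordinary host, since every broadcast interrupts every machine in the domain; watching how that number trends as devices are added tells you more than any fixed rule of thumb. A rough Linux sketch using a raw socket (run as root; the interface name is a placeholder):

        # Rough sketch (Linux, run as root): count broadcast frames per
        # second on an interface, as a crude baseline for how chatty the
        # /16 really is.
        import socket
        import time

        ETH_P_ALL = 0x0003
        BROADCAST = b"\xff\xff\xff\xff\xff\xff"

        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                          socket.htons(ETH_P_ALL))
        s.bind(("eth0", 0))               # interface name is a placeholder

        count, start = 0, time.time()
        while True:
            frame = s.recv(65535)
            if frame[:6] == BROADCAST:    # destination MAC = first 6 bytes
                count += 1
            if time.time() - start >= 1.0:
                print("broadcasts/sec:", count)
                count, start = 0, time.time()

    Tell-tale signs of trouble would be this count climbing steadily while host CPU time spent servicing interrupts rises with it, even on machines that are otherwise idle.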

    Read the article

  • Using Monit to monitor Resque

    - by Alex
    I'm trying to use Resque as a job runner for Rails. I've tried this config, and many other ways of daemonizing the resque task (because running rake resque:work leaves the terminal tied to that command). Unfortunately, their example configuration doesn't work for me. Does the configuration look correct? Or is there another way to turn the process into a daemon? Thank you :)

        check process resque_worker_QUEUE
          with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_QUEUE.pid
          start program = "/bin/sh -c 'cd /data/APP_NAME/current; RAILS_ENV=production QUEUE=queue_name VERBOSE=1 nohup rake environment resque:work& > log/resque_worker_QUEUE.log && echo $! > tmp/pids/resque_worker_QUEUE.pid'"
            as uid deploy and gid deploy
          stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -s QUIT `cat tmp/pids/resque_worker_QUEUE.pid` && rm -f tmp/pids/resque_worker_QUEUE.pid; exit 0;'"
          if totalmem is greater than 300 MB for 10 cycles then restart  # eating up memory?
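    One hedged observation on the quoted start program: in resque:work& > log/..., the & backgrounds rake before the redirection applies, so the worker's output never reaches the log, and monit's short-lived shell can behave differently from an interactive one. An alternative some setups use is a tiny wrapper that owns the daemonizing and pidfile writing, with monit pointed at the same pidfile - a rough Python sketch mirroring the paths above (which are placeholders):

        # Hedged sketch: daemonize "rake environment resque:work" and write
        # the pidfile monit watches. Paths, queue name and environment
        # mirror the monit config above but are still placeholders.
        import os
        import subprocess

        APP = "/data/APP_NAME/current"
        LOG = os.path.join(APP, "log/resque_worker_QUEUE.log")
        PID = os.path.join(APP, "tmp/pids/resque_worker_QUEUE.pid")

        env = dict(os.environ, RAILS_ENV="production",
                   QUEUE="queue_name", VERBOSE="1")
        with open(LOG, "ab") as log:
            proc = subprocess.Popen(
                ["rake", "environment", "resque:work"],
                cwd=APP, env=env, stdout=log, stderr=log,
                start_new_session=True,   # detach from the controlling terminal
            )
        with open(PID, "w") as f:
            f.write(str(proc.pid))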

    Read the article

  • buagent process has been consuming 100% cpu for two days

    - by Maysam
    The buagent process has been using 100% of the CPU since two days ago. I want to terminate this process, but I don't know whether that is dangerous or not (I am not very advanced in working with Linux; indeed, I am a real beginner). The only thing I know is that this process is probably restoring some files. But I think it is not normal for that to take more than two days. Now, do you think it would be OK to kill this process? What command could I use to do that? I appreciate any help :) p.s. We are hosting a few web sites there. This server is also our name server and mail server. A couple of months ago we had a problem with the server which forced us to take a full backup of all files and then reinstall Linux. Yesterday, I selected one of the directories on the backup server and restored that directory to a tmp directory on our Linux server. After that, I couldn't restore any other directory, because every time I try, it says that there is another restore job running and I have to wait for it. When I use the "top" command I can see that the buagent process is consuming 100% of the CPU. So I guess that is the problem. I don't know why it is taking so long to execute.

    Read the article

  • 1 PC, 2 consoles (as in 2 monitors, keyboards and mice)

    - by ciuly
    I have this desire to "kill 2 birds with one shot". Currently, I have 1 server running round the clock, 1 laptop that runs about 8 hours a day, 7 days a week, and a desktop that runs about the same length of time. All 3 are... old, to say the least. So there is a great need to upgrade (well, the server might handle its job for another year or so, but that only depends on how much time I have to put it to "work"). Now, I'm "dreaming" of only one PC. I'm thinking VMware's ESX. So there would be a VM for the server, a VM for the "laptop" and one for the "desktop". And obviously I'll have to somehow "link" a set of monitor/keyboard/mouse with each of the laptop/desktop VMs. The server doesn't need such things, obviously (it doesn't have them at this moment either). Is something like this possible? ESX is not a requirement; it's just something I found that answers part of my problem. There still remain the two KVM sets that need connecting and "linking" to the appropriate VMs. Why would I want to do this? Well, first of all, it's much cheaper to upgrade one PC than 3. Then, the power consumption is obviously lower. Plus the extra space. Plus it allows me to better separate networks and services. Thanks.

    Read the article

  • Role of MBR in the booting process

    - by pg4421
    I am new to Stack Overflow, so please correct me if my question seems irrelevant or stupid. I read this in a description of the booting process:

      The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed.

    My question: my hard disk holds two operating system images, Windows and Ubuntu, so shouldn't both partitions in which they reside be active? Then why is there always only one active partition? (I know that the active partition is one of the primary partitions, but why does one primary partition get this special treatment?) I am a bit confused. Please resolve my query. Thank you so much.
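    It may help to see how literal the "active" flag is: it's a single byte per entry in the MBR's partition table, and standard stage-1 boot code expects exactly one entry marked 0x80. Dual-boot setups work not by marking two partitions active, but by having a boot manager (GRUB, for instance) in the boot path that chainloads whichever OS you pick. A small Python sketch that dumps the flags from a disk device (needs root) or a raw disk image:

        # Sketch: dump the boot-indicator flag of the four primary partition
        # entries in an MBR. The partition table starts at byte 446; each
        # entry is 16 bytes, and its first byte is 0x80 (active) or 0x00.
        import sys

        def partition_flags(path):
            with open(path, "rb") as f:
                mbr = f.read(512)
            assert mbr[510:512] == b"\x55\xaa", "no MBR boot signature"
            for i in range(4):
                entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
                flag = entry[0]    # boot indicator
                ptype = entry[4]   # partition type id
                print("entry %d: boot flag 0x%02x, type 0x%02x"
                      % (i, flag, ptype))

        if __name__ == "__main__":
            partition_flags(sys.argv[1])   # e.g. /dev/sda or disk.img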

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping files of c. 1 GB+ or UPDATEs on large MySQL tables, has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which is lower spec. That took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB; now it is only writing about 1 MB every 15 seconds. Does anyone have any advice on debugging the issue? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
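    A first diagnostic step could be separating "the disk (or its encryption layer) got slow" from "MySQL got slow": benchmark plain sequential writes on the encrypted home partition and on the MySQL drive and compare. A rough Python sketch (it writes and then deletes a test file at whatever path you pass in):

        # Hedged sketch: crude sequential-write benchmark to separate
        # "disk/encryption is slow" from "MySQL is slow". Run it once on the
        # encrypted home partition and once on the MySQL drive.
        import os
        import sys
        import time

        def bench(path, total_mb=512, chunk_mb=4):
            chunk = os.urandom(chunk_mb * 1024 * 1024)
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(total_mb // chunk_mb):
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())   # force the data to disk before timing
            seconds = time.time() - start
            os.remove(path)
            print("%s: %.1f MB/s" % (path, total_mb / seconds))

        if __name__ == "__main__":
            bench(sys.argv[1])   # e.g. ~/benchfile vs /mnt/mysql-drive/benchfile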

    Read the article

  • EXCEL 2007 macro

    - by Binay
    I have a macro which connects to a DB, fetches data for me, and makes it comma-separated. The problem is that a trailing comma is appended to every output line, which I don't want, and I'm struggling with it. Could you please help out? Here is the relevant part of the code:

        If cn.State = adStateOpen Then
            Rec_set.Open "SELECT concat(trim(Columns_0.ColumnName), ' ','(', 'varchar(2000)' ,')') columnname " & _
                "FROM DBC.Columns Columns_0 " & _
                "WHERE (Columns_0.TableName= " & Chr(39) & Tablename & Chr(39) & _
                " and Columns_0.Databasename=" & Chr(39) & db & Chr(39) & ") ORDER BY Columns_0.Columnid;", cn 'Issue SQL statement

        If Not Rec_set.EOF And Not Rec_set.EOF Then
            Do Until Rec_set.EOF
                For i = 0 To Rec_set.Fields.Count - 1
                    strString = strString & Rec_set(i) & ","
                Next
                strFile.WriteLine (strString)
                strString = ""
                Rec_set.MoveNext
            Loop

    Here is the result I am getting:

        EMPNO (varchar(2000)),
        ENAME (varchar(2000)),
        JOB (varchar(2000)),
        MGR (varchar(2000)),
        HIREDATE (varchar(2000)),
        SAL (varchar(2000)),
        COMM (varchar(2000)),
        DEPTNO (varchar(2000)),

    I don't want the last comma.
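    The usual cure is to collect the fields first and insert the separator only between them, rather than appending "," after every field - or, equivalently, trim the final character before WriteLine. Illustrated in Python (the macro itself is VBA; this just shows the pattern):

        # Illustration in Python of the collect-then-join pattern that avoids
        # a trailing separator; in the VBA above, the equivalent is gathering
        # the fields into an array (or trimming the final character) before
        # WriteLine.
        fields = ["EMPNO (varchar(2000))", "ENAME (varchar(2000))",
                  "JOB (varchar(2000))"]

        # Appending "," after every field leaves a trailing comma:
        appended = ""
        for f in fields:
            appended += f + ","
        print(appended)            # ends in ",", the unwanted comma

        # Joining inserts the separator only *between* fields:
        print(",".join(fields))    # no trailing comma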

    Read the article

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server with Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them over time to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I run a disk check so that when I reboot, I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks
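    For the intrusion-attempt side, one low-tech addition to grepping for "Fail" is a nightly per-IP count of failed SSH logins, which makes brute-force sources stand out. A rough Python sketch, assuming the standard Ubuntu auth.log location:

        # Hedged sketch: nightly cron job summarising failed SSH logins per
        # source IP from /var/log/auth.log (path as on Ubuntu). Anything
        # with hundreds of attempts is a candidate for a firewall block.
        import collections
        import re

        PATTERN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
        counts = collections.Counter()

        with open("/var/log/auth.log") as f:
            for line in f:
                m = PATTERN.search(line)
                if m:
                    counts[m.group(1)] += 1

        for ip, n in counts.most_common(20):
            print("%6d  %s" % (n, ip))

    For the forced fsck, the check interval is a filesystem setting: tune2fs -c (mount count) and -i (days) adjust it, so the check can be scheduled deliberately rather than surprising you at reboot.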

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all the client's subdomains (*.example.com) need to go through the VPN connection. I tried a couple of options:

    Changing the order of services and putting the VPN on top - but this works the same as "Send all traffic over VPN connection".

    Using the "VPN on Demand" option from the network advanced options - but this feature is quite rubbish, to be honest. It seems to work only in Safari (?!), and it doesn't route the connection; it basically just triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but this doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file, but considering there could be hundreds of servers (and therefore hostnames and IPs too), that's a ridiculous job. And if a new server appears on the network, I'd need to edit the hosts file again. Sisyphean labours. However, this works very simply on Windows: if a hostname is not available through the default network interface, it seems to try the VPN connection, and that works brilliantly. So, how can I achieve the same on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains through a different DNS)? PS. Using latest version 10.6.6. PS2. I am using the VPN to access an intranet, version control servers (svn://), Samba shares, and for SSH access to servers.
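    On the DNS side, Mac OS X supports per-domain resolver files: a file named /etc/resolver/example.com sends lookups for anything under example.com to the nameservers listed in it, leaving all other lookups alone. A rough Python sketch that writes one (the domain and DNS address are placeholders; run with sudo):

        # Hedged sketch (Mac OS X): create /etc/resolver/example.com so that
        # lookups under example.com go to the VPN's DNS server. Domain and
        # nameserver address are placeholders, not from the original post.
        import os

        DOMAIN = "example.com"
        VPN_DNS = "10.0.0.53"

        os.makedirs("/etc/resolver", exist_ok=True)
        with open(os.path.join("/etc/resolver", DOMAIN), "w") as f:
            f.write("nameserver %s\n" % VPN_DNS)
        print("wrote /etc/resolver/%s" % DOMAIN)

    This only fixes resolution; the VPN still has to supply routes for the addresses that come back, which it normally does for its own subnets even without "Send all traffic".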

    Read the article

  • When buying hardware, what sites do you trust for information? [closed]

    - by Matt Dawdy
    I won't ask "what laptop should I buy", since the information is likely to change very quickly. However, I am about to buy a laptop, and I honestly don't know where to begin researching this based on my needs. I am hoping that by asking about specific sites that do reviews/recommendations, this will still be on topic. I read the 6 guidelines for subjective questions and believe that this question scores favorably. I'm starting a new job in a few weeks, and they want to know what specific laptop to purchase. I'd like to get the most for their money and get a machine that will not need to be replaced in a year. When looking at a site like Dell, it's hard to get a full picture of the performance of a laptop. Does it work with a docking station, and if so, what kinds of video outs are on it? Will it work well when compiling several large projects in .NET? Has anyone had any issues with the machine getting flaky when dragging it from work to home and back all the time? Etc. So, if people would share the sites they prefer to use when researching hardware, and why they prefer them ("x is great for laptop comparisons", "y is great for gaming machine reviews", etc.), then I hope that this can be a question with valuable answers for others besides just myself.

    Read the article

  • How to configure SVN server for my own project

    - by user1729952
    I work with a team on an Android project using the Eclipse IDE. We need to use version control, and we need to access the repo remotely. I have no experience using or installing servers, and only a little experience using SVN on Windows, and I still have problems connecting to it remotely. I need to use no-ip.com to deal with my changing IP; however, I failed to make VisualSVN Server work with No-IP. What options do I have? The best thing would be to get it working on Windows. If not, I have another computer running Ubuntu 12.04.1; I have installed apache2svn on it, trying to get it to work. SVN itself is installed, and I went through tutorials to configure the access protocols, but I can't figure out how to access it remotely from another computer. Can someone tell me the steps I need to get this job done, so I can do my own research on each step? (Please explain each step, as I may not be familiar with some keywords or phrases.) EDIT: Also worth noting: my company has a website hosted on a remote server, which is running Linux. Can we use it as a repo, and how?

    Read the article

  • Strange network issue (ZIP file fails CRC test over VPN)

    - by Joe Schmoe
    We have a server in the office running Windows Server 2003. Our office is connected to our datacenter via a hardware VPN (a Linksys RV082 router in the office to a Cisco router in the datacenter). A job that runs on the office server does the following: ZIPs certain files from the server using 7-Zip; copies the ZIP file to a network share in the office and verifies the ZIP's integrity; then copies the ZIP file to a network share in the data center and verifies the ZIP's integrity there. The problem is that verifying the ZIP's integrity for the file in the data center always fails. However, if I run 7-Zip on the data center server that exposes that share, the ZIP file verifies just fine, so it is not actually corrupted during the copy operation. Additionally, I tried running 7-Zip on other computers in the office to verify the ZIP file on the datacenter file share, and it verifies OK. I tried plugging the server into the same network port where my workstation is connected, using a different cable (my workstation doesn't exhibit this problem), and ZIP verification still fails. So the problem is local to that specific server. In the network adapter properties for the server in question, there is no "Advanced" tab where one can usually configure a lot of network settings. The network card driver is up to date (Windows Update doesn't find anything newer, and the Lenovo website doesn't have any drivers for Windows 2003 for this computer model). Is there any other way to configure network settings via the command line? What settings could be relevant to this problem?
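    A way to pin down where the bytes change: hash the ZIP on the office server and hash the copy on the datacenter share, from the problem server and from a known-good machine, and compare. A rough Python sketch (both paths are placeholders):

        # Hedged sketch: compare SHA-256 of the local ZIP and the copy on
        # the datacenter share (UNC path is a placeholder). If the problem
        # server computes a wrong hash for a file other machines hash
        # correctly, the corruption is on its receive path.
        import hashlib

        def sha256(path, chunk=1024 * 1024):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    h.update(block)
            return h.hexdigest()

        local = sha256(r"D:\backups\job.zip")
        remote = sha256(r"\\datacenter-srv\backups\job.zip")
        print("local :", local)
        print("remote:", remote)
        print("match" if local == remote else "MISMATCH - corrupted in transit")

    If the mismatch follows that one server, driver-level offload settings (e.g. TCP checksum offload) are a common suspect and, absent an Advanced tab, may still be reachable through the vendor's own configuration utility or the driver's registry parameters.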

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu Server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5:

        Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ...
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured

    I tried a wide variety of approaches suggested by googling, involving various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with MySQL per se (though I will need to figure it out at some point); at this point, my primary concern is that I am unable to apt-get install other packages! What to do?

    Read the article

  • Debian 7 and PHP 5.4.4 error reporting

    - by milovan
    I use the default php.ini, and then in my PHP script (local.settings.php in Drupal) I simply set:

        ini_set('error_reporting', 'E_ALL & ~E_NOTICE & ~E_STRICT');

    According to the documentation this means "show all messages minus notices and strict warnings". But in my case it still shows strict warnings! I have no idea why, because I clearly stated "~E_STRICT". If I comment it out, then I see strict warnings. So it means that the default from php.ini, "E_ALL & ~E_DEPRECATED & ~E_STRICT", didn't do its job either, as it also has "~E_STRICT" but I still see strict warnings. On Debian 6 there was the Suhosin patch, which controlled the use of ini_set in PHP scripts, especially when you tried to get more memory than the defined cap. Now on Debian 7 there is no Suhosin, nor any other security element that might interfere with ini_set. So what might cause the ini_set call not to take effect? Is there some new variable / setting / other that needs to be checked?

    Read the article

  • Y560 Lenovo IdeaPad bandwidth issue

    - by Vlakarados
    I have a Lenovo IdeaPad Y560. My config: i7 740QM, 4 GB RAM, Radeon HD 6570M/5700 1024 MB, and my network adapter is an Intel WiFi Link 5150. The laptop is 2 years old, and the problem I'm about to describe has been present from the first day. As may be seen here, the receive bandwidth should be up to 300 Mbps, but the maximum download speed from LAN and using torrents is about 2.4 MB/s. My internet connection is 100 Mbps, and other laptops in my house have the appropriate download speed - up to 12 MB/s. I have tested at my friend's house and at my job; the speed remains the same. I have tried all possible configurations I could think of in the network settings - nothing helps. I use Windows 7 and have had different versions installed (Ultimate, Professional, Home, OEM Home, 64- and 32-bit). Some time ago I searched for the problem and found one or two threads about the same issue, where something was said about a limitation in the firmware that some experienced users have managed to bypass. Updating the drivers didn't help either. Is there any reliable way to fix this?

    Read the article
