Search Results

Search found 22569 results on 903 pages for 'win32 process'.

Page 500 of 903

  • Problems starting MySQL on Mac OS X

    - by Jon
    I am not able to start the MySQL server on Mac OS X 10.4.11. MySQL was installed using MacPorts. MySQL was running fine until it suddenly died without any obvious reason. When running "mysql", I get the error message: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/local/var/run/mysql5/mysqld.sock' (2) If I try to start MySQL manually, I get the following error message: sudo /opt/local/share/mysql5/mysql/mysql.server start Starting MySQL/opt/local/share/mysql5/mysql/mysql.server: line 159: kill: (636) - No such process ERROR! In /etc/mysql/my.cnf I have: socket = __PREFIX/var/run/mysqld/mysqld.sock But the path "/opt/local/var/run/mysqld/" does not exist on my system. I tried to change the socket path to "__PREFIX/var/run/mysql5/mysqld.sock" (which is where the socket is located). Unfortunately, this did not help either. Owner and permissions for /opt/local/var/run/mysql5/ are correctly set. Any suggestions on how to start MySQL again? Thanks for your advice.
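
    A hedged sketch of a workaround, assuming the socket really lives under /opt/local/var/run/mysql5/ as the error suggests (paths are taken from the question, not verified):

        # confirm the socket exists once mysqld is up, and who owns it
        ls -l /opt/local/var/run/mysql5/mysqld.sock
        # point the client at that socket explicitly
        mysql --socket=/opt/local/var/run/mysql5/mysqld.sock -u root -p
        # or persist it in /etc/mysql/my.cnf with the literal path, since the
        # __PREFIX placeholder may not be expanded in a hand-edited file:
        #   [client]
        #   socket = /opt/local/var/run/mysql5/mysqld.sock
        #   [mysqld]
        #   socket = /opt/local/var/run/mysql5/mysqld.sock
        # the "kill: ... No such process" from mysql.server often points at a
        # stale PID file in the data/run directory; removing it is worth a try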

    Read the article

  • Temporarily disable an AD server

    - by 3molo
    Topology and setup: We have main office A and branch office B (abroad). Our ISP somehow messed up the MPLS, and the link between office A and office B will be down for a few days. At location B we have one AD server (the other two ADs are at location A). Location A also has an Exchange server. The problems: A few users at A have problems logging in to their computers running Windows XP; the logon process hangs at "Applying computer policies". Additionally, I can't start the Exchange Management Shell; it fails on get-recipient because the AD abroad (location B) is unreachable. Solution? I could delete the AD at B, but I'm pretty sure it would be a hassle to re-join it, and since the office is abroad it's not an option to just go there and re-install and re-join it. So I now wonder how, on location A's primary and secondary ADs, I can temporarily disable the AD at location B.

    Read the article

  • How do *you* track and document routine maintenance?

    - by Zak
    What software or system do you guys on Server Fault use to remind you to do routine maintenance? How do you checklist and log the various items you are supposed to check? Do you have an internal process document? Do you have cron mail you every week with reminders to check system logs? Also, do you work on a team to do system maintenance, and if so, how do you coordinate who will do what maintenance? If you use a bug/issue tracking system to enter tasks, do you have a cron job enter recurring tasks?
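
    Where a full ticketing system is overkill, cron itself can drive the reminders. A minimal sketch, assuming a working local MTA; the address and checklist path are hypothetical:

        # crontab entry: mail the checklist every Monday at 08:00
        # m h dom mon dow  command
        0 8 * * 1  mail -s "Weekly maintenance checklist" ops@example.com < /etc/maintenance/checklist.txt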

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files -- one file for each -- and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder, but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered: (1) Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file. (2) Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters (see the sketch below). The downside is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server. (3) Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question is: what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
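
    For the whitelist option above, a minimal sketch of the idea as a shell one-liner; the character set and the 200-character cap are assumptions, not requirements from the question:

        title='Some "Unsafe" Blog Post: Title / with ? characters'
        # replace every character outside the safe set with "_", then cap the length
        fname=$(printf '%s' "$title" | tr -c 'A-Za-z0-9._-' '_' | cut -c1-200)
        echo "$fname"   # -> Some__Unsafe__Blog_Post__Title___with___characters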

    Read the article

  • Black screen on login, can get thru decrypt disk and access command line but no GUI

    - by t3lf3c
    Running a fresh 12.04 64-bit alternate install, with disk crypto, on a new Lenovo laptop. The installer didn't connect to the network and install modules, even though I had the network cable plugged in and don't have any whacky proxy settings. I had to manually install ubuntu-desktop and define sources after the initial installation, so this seemed a bit weird (the ISO matched the MD5 sum though). I unplug the network cable, otherwise I get a black screen that I can do nothing with. So I turn the laptop on; I have disk encryption; I type in the password at the Ubuntu decryption GUI, then get a "set up successfully" message, then "Waiting for network configuration ...", then "Waiting for up to 60 more seconds for network configuration". At this stage: (a) if I wait for it, I get a black screen that I can do nothing with; (b) if I interrupt the process by pressing Escape, I break through to the command line. From the command line I can log in, then plug my network cable in to do apt-get commands. As a precaution I do some housekeeping, which takes a few minutes to run: sudo apt-get update sudo apt-get upgrade Running startx to get to the GUI gives: Fatal server error: no screens found The .Xauthority file is being created in my home directory but it's empty. I review my order and note the system graphics: Intel HD Graphics (WWAN or mSATA capable). So it's weird that I can't get to GNOME. It looks like the drivers aren't working. Is there a way of getting Intel drivers from the command line? Or do you have any other suggestions on what to try next?
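
    A hedged first step from the command line, assuming the stock Intel driver package on 12.04 (xserver-xorg-video-intel) is what X is missing here:

        sudo apt-get update
        sudo apt-get install --reinstall xserver-xorg-video-intel
        # "no screens found" usually leaves details behind in the X log:
        grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log | less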

    Read the article

  • Webcast - Oracle Database In-Memory Option

    - by Thanos Terentes Printzios
    Following the recent announcement by Larry Ellison on the future of the database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can learn more about the brand new Oracle Database 12c In-Memory option. Oracle Database In-Memory is Oracle's new memory-optimized technology that transparently accelerates analytic, data warehousing, and reporting workloads, while also accelerating transaction processing (OLTP) workloads. Participants will learn about Oracle Database In-Memory benefits, features, and leading-edge architecture. The Database In-Memory architecture provides the ability to process data orders of magnitude faster by simply enabling the feature and identifying the tables to bring in-memory, without application changes. Details on Oracle Database In-Memory's ease of use and management, scalability, and availability will also be covered. Please join us to learn more about Oracle Database In-Memory and get first-hand knowledge of this important new feature. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. These Oracle webcasts are FREE for customers, system integrators, ISVs, VARs and platform partners. Presenter: Richard Jacobs, Oracle Solution Architect. Europe Webcast 1: August 29, 2014, 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here! Europe Webcast 2: September 29, 2014, 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here!
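
    For the curious, a minimal sketch of what "enabling the feature and identifying tables" looks like, assuming SYSDBA access and a hypothetical SALES table; the clauses follow the documented 12c In-Memory syntax:

        sqlplus / as sysdba <<'EOF'
        -- reserve column-store memory (takes effect after an instance restart)
        ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;
        EOF
        # after restarting the instance:
        sqlplus / as sysdba <<'EOF'
        ALTER TABLE sales INMEMORY;         -- populate this table into the column store
        ALTER TABLE audit_log NO INMEMORY;  -- explicitly exclude another
        EOF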

    Read the article

  • Updating Banshee to 2.4

    - by Lucasguy11
    I have Banshee 2.2.1 with Ubuntu 11.10. I have been trying to update Banshee to 2.4 (released yesterday), but it just isn't working. I have been using sudo add-apt-repository ppa:banshee-team/ppa in the terminal, from the banshee.fm website, but after running it the terminal says this: sudo add-apt-repository ppa:banshee-team/ppa You are about to add the following PPA to your system: PPA for Banshee Team This PPA contains the latest stable debs of Banshee for Ubuntu. To install Banshee, you must first enable the PPA on your system: 1. Open Software Sources (System->Administration->Software Sources) 2. Navigate to the "Third Party Sources" tab. 3. Click "Add" 4. Enter the APT line below that corresponds to your Ubuntu version that starts with "deb". 5. Click "Add Source" 6. Click "Close" 7. It will prompt you to reload your software cache. Click "Reload". 8. Now install the package "banshee" from Synaptic, or using the command below: sudo apt-get install banshee For those who wish to compile from trunk, add the deb-src line and then run "sudo apt-get build-dep" to install all required dependencies before starting to compile. Unstable (version which have odd minor version numbers) debs of Banshee can be found here: https://launchpad.net/~banshee-team/+archive/banshee-unstable More info: https://launchpad.net/~banshee-team/+archive/ppa Press [ENTER] to continue or ctrl-c to cancel adding it Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.OPAjxemDQr --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 9D2C2E0A3C88DD807EC787D74874D3686E80C6B7 gpg: requesting key 6E80C6B7 from hkp server keyserver.ubuntu.com gpg: key 6E80C6B7: "Launchpad PPA for Banshee Team" not changed gpg: Total number processed: 1 gpg: unchanged: 1 I believe I have the PPA, but I'm not sure. I need a step-by-step process to get this; I've been trying to figure it out for quite a while now...
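
    The gpg output above only shows the signing key was already imported; the PPA still has to be read and the package pulled. A short sketch of the likely missing steps:

        sudo apt-get update              # refresh package lists from the new PPA
        apt-cache policy banshee         # the candidate should now show 2.4.x from the PPA
        sudo apt-get install banshee     # upgrades 2.2.1 in place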

    Read the article

  • Enjoy Seamless Reading at Twitter in Chrome

    - by Asian Angel
    Twitter can be a lot of fun, but having to constantly use the More button to view a large number of tweets is frustrating. All you need to be rid of that frustration is the More Tweets! extension for Google Chrome. Before: here it is… the classic “More Button”. If you are only interested in viewing a few tweets on occasion, it is not a problem, but if you are looking at a large number of tweets on a daily basis it can be very frustrating. Notice the last tweet from TinyHacker shown here… After: after installing the extension, the only thing you need to do is refresh your Twitter page if you had it open beforehand. Now there will be a seamless connection from page to page as you read through tweets. You can see the TinyHacker tweet from above followed oh so nicely by tweets from the second page… this is definitely an improvement. For those who are curious: if you are quick enough with your mouse, you can see what the “automated connection process” looks like. Conclusion: if you are tired of constantly clicking the “More Button” and just want to read tweets without interruption, you will be very satisfied after adding this extension to your browser. Links: Download the More Tweets! extension (Google Chrome Extensions)

    Read the article

  • Linux to Linux, 10TB transfer?

    - by lostincode
    I've looked at all the previous similar questions, but the answers seemed to be all over the place and no one was moving a lot of data (100GB != 10TB). I've got about 10TB that I need to move from one RAID to another, over a gigabit network, XFS file systems. My biggest concern is having the transfer die midway and not being able to resume easily. Speed would be nice, but ensuring the transfer completes is much more important. Normally I'd just tar & netcat, but the RAID I'm moving from has been super flaky as of late, and I need to be able to recover and resume if it drops mid-process. Should I be looking at rsync?
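
    rsync is a reasonable fit. A hedged sketch, with placeholder paths and host, that keeps partial files so an interrupted run resumes, and retries until the copy completes:

        # -a archive, -H hard links, -A ACLs, -X xattrs; --partial keeps
        # partially-transferred files so a restart picks up where it died
        until rsync -aHAX --partial --progress /mnt/oldraid/ user@desthost:/mnt/newraid/; do
            echo "rsync died, retrying in 60s..." >&2
            sleep 60
        done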

    Read the article

  • VSDB to SSDT part 3: command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided here by Jakob Ehn (a most interesting read which also dives into the “deploying from Visual Studio” specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries): vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE> To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29): C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE> And from within a PowerShell script: & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE> The command will consume a publish.xml file where the connection string and the deployment options are specified. You may already be familiar with it if you have done deployments from Visual Studio; if not, please refer to the above-mentioned article by Jakob Ehn. It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx

    Read the article

  • LastPass Now Monitors Your Accounts for Security Breaches

    - by Jason Fitzpatrick
    Staying on top of security breaches and how they may or may not affect you is time consuming. Sentry, a new and free addition to the LastPass password management tool, automates the process and notifies you of breaches. In response to all the recent and unfortunate high-profile security breaches, LastPass has rolled out Sentry, a tool that monitors breach lists to notify you if your email appears in a list of breached accounts. The lists are supplied by PwnedList, a massive database of security breach data, and securely indexed against your accounts within the LastPass system. If there is a security breach and your email is on the list, you'll receive an automated email notice indicating which website was compromised and that your email address was one of the positive matches from the breach list. LastPass Sentry is a free feature and, as of yesterday, is automatically activated on all Free, Premium, and Enterprise level accounts. Hit up the link below to read the official announcement. Introducing LastPass Sentry [The LastPass Blog]

    Read the article

  • multicast tcpdump and subscriptions

    - by Karoly Horvath
    From the multicast howto: IP_ADD_MEMBERSHIP. Recall that you need to tell the kernel which multicast groups you are interested in. If no process is interested in a group, packets destined to it that arrive at the host are discarded. If you don't do that, you won't see those packets with tcpdump. Is it possible to subscribe to all multicast traffic so I can do a tcpdump of all existing traffic? I would think IGMP doesn't allow this, so probably not, but maybe you can configure a switch to still send all multicast traffic. Is that possible? Is it possible to do the subscription (for a specific IP) with a command-line tool? (Note: I know how to do this in C, but I would prefer to use an existing tool and not compile a separate program for this.)
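
    One hedged approach for the per-group case: let a throwaway process hold the IGMP membership while tcpdump captures. Group, port and interface below are placeholders:

        socat -u UDP4-RECV:5000,ip-add-membership=239.1.2.3:eth0 /dev/null &
        sudo tcpdump -i eth0 -n host 239.1.2.3
        # There is no standard "join all groups" knob: IGMP joins are per-group,
        # so seeing all multicast generally means disabling IGMP snooping on the
        # switch, or mirroring a port, rather than anything host-side.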

    Read the article

  • ASP.NET developers turning to Visual WebGui for rich management system

    - by Webgui
    When The Center for Organ Recovery & Education (CORE) decided they needed a web application to allow easy access to their expenses management system, they initially went with ASP.NET Web Forms combined with CSS. The outcome, however, was not satisfying enough, as it appeared bland and lacked richness. So, in order to enrich the UI and give the web application some glitz, Visual WebGui was selected. Visual WebGui provided the needed richness, and the familiar Windows look and feel also made the transition very easy for the desktop users. The richer GUI of Visual WebGui compared to ASP.NET raised some initial concerns about performance, but the Visual WebGui performance turned out to be a surprising advantage, as the website maintained good response times. Working with Visual WebGui required a paradigm shift for the development process, as some of the usual methods of coding with ASP.NET did not apply. However, the transition was fairly easy due to the simplicity and intuitiveness of Visual WebGui as well as the good support and documentation. “The shift into a different development paradigm was eased by the Visual WebGui web forums, which are very active thanks to a large, involved community. There are also several videos and web pages dedicated to answering the most commonly asked questions and pitfalls,” said Dave Bhatia, Systems Engineer, who added: “A couple of issues, such as deploying on IIS7, seemed to be show stoppers at first; however, the solution was readily available in a white paper on the Gizmox website.” The full story is found on the Visual WebGui website: http://www.visualwebgui.com/Gizmox/Resources/CaseStudies/tabid/358/articleType/ArticleView/articleId/964/The-Center-for-Organ-Recovery-Education-gets-a-web-based-expenses-management-system.aspx

    Read the article

  • Partner Infoline & Service Portal

    - by uwes
    As an EMEA-wide team we support the daily work of our partners. Our team consists of 24 sales consultants; one third are specialized on the Partner Infoline. Partner Infoline's main focus is to deliver, actively and reactively, technical pre-sales knowledge about the Oracle hardware portfolio to our partners. With Infoline we assist our partners in their daily work; furthermore, we help to educate our partners to be self-sufficient in all aspects of and questions about hardware configurations and hardware quotes. For our Infoline service we use a ticketing system called Service Portal, which is widely used within Oracle and delivers good, stable functionality and availability. Our Infoline service provides answers to questions concerning technical pre-sales matters that are related to hardware and the corresponding hardware-related software.* You can address these types of questions by sending them to our mailing list: [email protected] The Service Portal will send you an auto-reply including a unique reference number, which will be the identification for your request until it is closed. Depending on the complexity of the request, it might be necessary to forward it to our specialists (servers, storage, tape, Solaris etc.) located all over Europe. In order to make the whole process smooth, here are some recommendations: write your request in English (this saves translation time when it has to be forwarded to the specialists); state your interest area clearly in the title, for example "memory in M4000 server"; one request/one subject (this makes it easier to maintain and keeps the correspondence clear and simple). The rule of the service is to provide an answer quickly, which means the vast majority of requests are answered within a couple of hours. However, please keep in mind that some requests may need extra work, involving the appropriate person within Europe or even in the US. Therefore there is no official SLA for this service. * This excludes Oracle "classic" products and post-sales support. The latter should still be addressed through MOS (http://support.oracle.com)

    Read the article

  • How to view special characters in SQL Management Studio

    - by B Z
    SQL Server 2005. I have a text column that has special characters stored, e.g. CR, LF, but I don't know exactly what they are. I would like to view these characters in Management Studio, something like Notepad++'s Show Symbol > Show All Characters. My goal: I am working on a data conversion from one database to another. When the data is converted and viewed in the native application, it displays some funky characters, like a pipe character. I would like to eliminate these characters during the conversion process.
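
    Pending a real "show symbols" view, one hedged trick is to make the control characters print themselves, e.g. from a command prompt; server, database, table and column names below are placeholders:

        sqlcmd -S myserver -d mydb -Q "SELECT REPLACE(REPLACE(textcol, CHAR(13), '<CR>'), CHAR(10), '<LF>') FROM mytable"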

    Read the article

  • BUILD 2013 Session – Testing Your C# Based Windows Store Apps

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionndashtesting-your-c-base-windows-store-apps.aspx Testing an application is not what most people consider fun, and the number of situations that need to be tested seems to grow exponentially when building mobile apps. That is why I found the topic of this session interesting. When I found out that the speaker, Francis Cheung, was from the Patterns and Practices group, I knew I was in the right place. I have admired that team since I first met Ron Jacobs around 2001. So what did Francis have to offer? He started off in a rather confusing who's-on-first fashion. It seems that one of his testers was originally supposed to give the talk, but then it was decided that it would be better to have someone who does development present a testing topic. This didn't hinder the content of the talk in the least. He broke the process down in a logical manner that is straightforward to understand, if not to implement. Francis hit the main areas we usually think of, such as tombstoning, network connectivity and asynchronous code, but he approached them with tools we may not have thought of until now. He relied heavily on Fiddler to intercept and change the behavior of network requests. Then there are the areas you might not normally think to check. This includes localization, accessibility and updating client code to a new version. These are important aspects of your app that can severely impact how customers feel about it. Take the time to view this session and get a new appreciation for testing and where it fits in your development lifecycle.

    Read the article

  • How to cope with runaway Flash plugin in Google Chrome browser?

    - by Norman Ramsey
    I'm using Google Chrome for Linux, version 5.0.307.11 (Official Build 39572) beta, with the Linux Flash plugin version 10.0 r32. Quite often the Flash plugin goes wild and pegs the CPU at about 95% usage. The laptop gets hot and the battery drains. I can diagnose the problem with Chrome's little process monitor (Shift-Esc), and I can even kill the plugin, but then when I actually want to use Flash on a page, I can't find a way to restart the plugin; I have to exit and restart Chrome, which with 30 tabs open is a huge hit. Does anyone know what causes this problem? Does anyone have a better workaround (or, heaven forfend, a fix)? [I struck out both with search and with Google's help site for Chrome.]

    Read the article

  • Advice on selecting programming languages to concentrate on? (2nd year IT security student)

    - by Tyler J Fisher
    I'm in the process of considering which programming languages I should devote the majority of my coding studies to. I'm a 2nd-year CS student, majoring in IT security. What I want to do/work with: intelligence gathering, relational databases, virus design, the Snort network IPS. Current coding experience (what I'm going to keep): Java - intermediate; HTML5 - intermediate; SQL (MySQL, Oracle 11g) - basic; BASH - basic. I'm going to need to learn (at least) one of the following languages in order to be successful in my field. Languages to add (at least 1): Ruby (+Metasploit); C++ (virus design, low-level driver interaction, computationally intensive applications); Python (import ALL the things). My dilemma: if I diversify too broadly, I won't be able to focus on, and improve in, a specific niche. Does anyone have any advice as to how I should select a language? What I'm considering, and why: I'm leaning towards Ruby because of Metasploit support, despite its lower efficiency when compared to Python. Any suggestions based on real-world experience? Should I focus on Ruby, Python, or C++? Both Ruby and Python have been regarded as syntactically similar to Java, which my degree is based around. I'm going to be studying C++ in two years as a component of my malicious code class. Thanks, Tyler

    Read the article

  • installer hanging during .net 4.5 framework install

    - by Niall Collins
    I am having trouble installing Visual Studio 2012 Premium Edition. I kick off the installer, but it hangs when installing the .NET 4.5 Framework. I have left it for hours but there is no progression in the progress bar. I have downloaded the .NET 4.5 Framework separately and tried to install it from the MSI, but that also hangs similarly. The only way to kill it is by killing the process. Any ideas how I could resolve this issue, or what I need to do to troubleshoot it further? Any tips?

    Read the article

  • Migrating to new dovecot server; Dovecot fails to authenticate using old password database

    - by Ironlenny
    I am migrating my company's intranet from an OS X server to an Ubuntu 12.04 server. We use a flat file to store user names and password hashes. This file is used by Apache and Dovecot to authenticate users. The Ubuntu server is running Dovecot 2.0 while the OS X server is running Dovecot 1.2. I have already migrated WebDAV, which uses Apache for authentication; authentication works. I'm in the process of migrating our Prosody server, which uses Dovecot for authentication. Dovecot is up and running, but when I test authentication using either telnet (a login username password) or doveadm (sudo doveadm auth username), I get dovecot: auth: passwd-file(username): unknown user dovecot: auth: Debug: client out: FAIL#0111#011user=username in my log file. I can use sudo doveadm user username to perform a user lookup and it returns the user's info. I can generate a password hash locally and Dovecot will authenticate the test password just fine.
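
    A hedged first check: confirm what passdb the 2.0 instance actually loaded, since the config format changed between 1.2 and 2.0; the file path and scheme below are placeholders:

        doveconf -n passdb userdb
        # expect something along the lines of:
        #   passdb {
        #     driver = passwd-file
        #     args = scheme=SSHA /etc/dovecot/users
        #   }
        # "unknown user" from passwd-file usually means the args path is wrong,
        # or the username field in the migrated file doesn't match what clients
        # send (e.g. user vs user@domain)
        sudo head -1 /etc/dovecot/users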

    Read the article

  • How can I create (or do I even need to create) an alias of a DNS MX record?

    - by AKWF
    I am in the process of moving my DNS records from Network Solutions to the Amazon Route 53 service. While I know and understand a little about the basic kinds of records, I am stumped on how to create the record that will point to the MX record on Network Solutions (if I'm even saying that right). On Network Solutions I have this MX record (mail servers are listed in rank order): for myapp.net, mail server inbound.myapp.net.netsolmail.net. (preference 10), TTL 7200, labeled "Network Solutions E-mail". I have read that the payload for an MX record states that it must point to an existing A record in the DNS. Yet in the example above, that inbound.myapp... record only has the words "Network Solutions E-mail" next to it. Our email is hosted at Network Solutions. I have already created the CNAME records that look like this: mail.myapp.net 7200 mail.mycarparts.net.netsolmail.net. smtp.myapp.net 7100 smtp.mycarparts.net.netsolmail.net. Since I am only using Amazon as the DNS, do I even need to do anything with that MX record? I appreciate your help; I googled and researched this before I posted. This is my first post on Webmasters, although I've been on SO for a few years.
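
    In Route 53 an MX record's value carries the preference inline, so no separate "pointer" record is needed; a hedged sketch of the value, mirroring the Network Solutions entry, plus a post-cutover check:

        # Route 53 MX record for myapp.net, value:
        #   10 inbound.myapp.net.netsolmail.net.
        dig +short MX myapp.net   # expected: 10 inbound.myapp.net.netsolmail.net.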

    Read the article

  • Ubuntu server failing daily

    - by deanvz
    Symptoms: server becomes unresponsive (increase in load, all services stop); loss of connectivity (ping/SSH); must flush MySQL hosts after reboot, as MySQL refuses new connections; intermittent Apache crashes. Generally happens in the early morning hours; 2 days of the week are, however, excluded. Changes made: updated the OS to Ubuntu 10.04.4 LTS (not sure if the MySQL server was also updated in the process; current MySQL version: mysql Ver 14.14 Distrib 5.1.63, for debian-linux-gnu (x86_64) using readline 6.1); updated Plesk from 10.4.4 Update #47 to 11.0.9 Update #23; rebooted on an almost daily basis; stopped all crons for the times corresponding to the server crashes; created a MySQL log to monitor the lock times on queries. Possible causes: failing hardware; incorrect software configuration (MySQL, Apache etc). Responsibilities: small webserver; runs our billing system (WHMCS); responsible for crons; bulk-email solution (no delivery times coincide with server crashes). Proposed solutions: move the machine over to a VM; format and restore the Plesk server backup and take it from there? Side note: there seems to be a general intermittent Apache failure across all our Linux servers. Are we doing something fundamentally wrong in the Apache config? (I understand that this is a secondary question; just making sure it isn't possibly holding any relevance.)
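
    A hedged triage sketch for the next crash window; the log paths assume a stock Ubuntu layout, and the sar line only applies if sysstat is installed and collecting:

        # out-of-memory kills are a common cause of "load spikes, then everything dies"
        grep -iE 'oom|out of memory|killed process' /var/log/syslog /var/log/kern.log
        last -x reboot shutdown | head      # correlate reboots with the crash times
        sar -q                              # load-average history around the crash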

    Read the article

  • Problem after mysql-server installation: I can't install anything in Ubuntu 12.04.1 now

    - by mohammed ezzi
    I'm not an advanced user of Linux. I tried to install and work with a database, so I installed mysql-server. I think I did something wrong, because I got in trouble: now I can't install anything, and this is what I get when I use apt-get -f install: root@me:~# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: mysql-server mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server mysql-server-5.5 2 upgraded, 0 newly installed, 0 to remove and 194 not upgraded. 2 not fully installed or removed. Need to get 0 B/8,737 kB of archives. After this operation, 15.4 kB of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of mysql-server-5.5: mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.24-0ubuntu0.12.04.1); however: Version of mysql-server-core-5.5 on system is 5.5.28-0ubuntu0.12.04.3. dpkg: error processing mysql-server-5.5 (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) I tried to remove mysql-server but nothing happened.
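
    The key line in the output is the version skew: mysql-server-5.5 wants core 5.5.24 but core 5.5.28 is installed. A hedged sketch of the usual way out, pulling both packages to the same current version:

        sudo apt-get update
        sudo apt-get install mysql-server-5.5 mysql-server-core-5.5
        # if apt still refuses, finish configuring whatever is half-installed:
        sudo dpkg --configure -a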

    Read the article

  • Lingering tcp connection in LISTEN state

    - by Silvio Donnini
    My Java application can sometimes be killed by an external script. This can be done either with SIGTERM or with SIGKILL. The application is a server which receives many connections per second, and it can be killed while trying to serve them. I would like to restart the application whenever it's killed, so I have prepared a script for that purpose. The problem is that, once the app has been killed, the new application instance can't bind to the port used by the previous instance, because the "Address is already in use". The previous instance's process has definitely been terminated; anyway, the offending listening port is still there, but it is assigned to bash (or sh on other machines). Obviously, my goal is to restart the application and let it bind successfully to the previous address. I've tried waiting more than 200 seconds before restarting, to no avail, and anyway I can't afford to wait that much. I've encountered this problem on all the machines I've run the application on (it is a Jetty server with Java 1.6). Any suggestion is appreciated. Thanks, Silvio
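
    Since the port is being held by a shell that inherited the listening descriptor, a hedged sketch for the restart script; port 8080 and the start command are placeholders:

        lsof -i TCP:8080 -sTCP:LISTEN   # shows which bash/sh still holds the socket
        fuser -k 8080/tcp               # forcibly free the port
        while fuser 8080/tcp >/dev/null 2>&1; do sleep 1; done
        java -jar server.jar &          # placeholder for the real start command
        # longer term: spawn children from Java with inherited fds closed, so
        # shells never end up owning the listening socket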

    Read the article

  • Microsoft’s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year. Microsoft literally buys computers by the truckload. From what I understand, it’s a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number). Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative. I find it to be a great marketing idea with actual substance behind their real work. From the programmer academic perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it. A copy of the email received (note that I almost deleted this email, thinking it was spam due to its length): We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article
