Search Results

Search found 16383 results on 656 pages for 'bi applications'.


  • What is the risk of introducing non-standard-image machines into a corporate environment?

    - by Troy Hunt
    I'm after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32-bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I'm looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64-bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I'm trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios beyond the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist; I'm just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • What are the best configuration settings for WordPress and MySQL on a Win2008 + IIS7 stack?

    - by holiveira
    I currently have four blogs that use WordPress, running at a shared hosting company. These blogs get a considerable number of visits and I'm constantly receiving warnings from the hosting company saying that I'm consuming too much server CPU. Considering the fact that I have a dedicated server at another company with plenty of idle resources (it has a quad-core Xeon 2.5GHz and 8GB of RAM and runs Win2008), I'm planning to move the blogs to this server in order to have some more freedom. I'm currently using this server to host some web applications using ASP.Net and SQL Express. I've installed a blog to test and it worked fine, but some issues appeared and raised some questions in my mind: How do I properly set the permissions on the folders used by WordPress plugins? I mean, what permissions should I set for the IIS_User on some folders so that the plugins work correctly? What's the best caching plugin to use considering this is a Windows server? At the previous hosting company I used WP Super Cache, but that was a Linux stack. Or should I ignore the caching plugins and use the Dynamic Caching feature of IIS7? How can I optimize the MySQL server running on this machine (especially the settings regarding memory and caching)? How can I protect the admin folders against hacker attacks? I know some people will advise me not to run WordPress on a Windows stack, but that's my only choice. I don't even know where to start managing a LAMP stack, and I have neither the time to learn one nor the money to rent another server.

    Read the article

  • Windows mounted network drives slow after upgrading switch

    - by Kver
    On our small business network our old 10/100 consumer-grade switch gave up the ghost, and we replaced it with a proper business-grade gigabit switch. After wiring it in, our Linux and Mac users immediately got back to working off of network drives, but two of our three Windows 7 PCs have suddenly experienced a tremendous slowdown with mapped network drives: Windows will become stuck "discovering" a folder, causing applications to freeze when trying to open files. It will instantly display and browse files, but the moment you try to open one the bug hits. To remedy this we have our users copying files to the desktop, but it can take a few minutes while Windows is stuck "calculating" the time it will take to copy. These aren't big files, mostly Excel sheets less than 500KB - these operations are instant on Linux and Mac. (The third Windows machine is having no issues.) I've tried remapping the drives, mapping to different drive letters, rebooting, etc. I'm at a loss, because switches are mostly transparent, and it's only after the switch was replaced that the Windows PCs started acting up. What black-magic voodoo am I missing to make Windows work? Thank you.

    Read the article

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way. I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back to us. I plan to allow the following TCP ports to support email and web access, which I know everyone will need: POP3 (110 and 995), HTTP (80 and 443), IMAP4 (143 and 993), SMTP (25 and 465). The question really is, what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note - I'll also be setting up monitoring systems to keep an eye on the ports we do allow; any of the above could be misused, of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open
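    Since the asker mentions setting up monitoring for the allowed ports, here is a minimal reachability-check sketch in Python (standard library only); the hostnames are placeholders, to be replaced with servers each office actually needs to reach:

        import socket

        # Hypothetical test targets - substitute hosts you actually use.
        CHECKS = [
            ("pop.example.com", 110), ("pop.example.com", 995),
            ("www.example.com", 80), ("www.example.com", 443),
            ("imap.example.com", 143), ("imap.example.com", 993),
            ("smtp.example.com", 25), ("smtp.example.com", 465),
        ]

        def port_open(host, port, timeout=5):
            """Return True if an outbound TCP connection succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        for host, port in CHECKS:
            print(f"{host}:{port}", "open" if port_open(host, port) else "blocked")

    Note this only covers TCP; DNS on UDP 53 would need a separate check (or a simple socket.getaddrinfo() call against the office resolver).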

    Read the article

  • Can I install windows on an SSD and access data from my old windows HDD?

    - by nzifnab
    I purchased new computer components, switching my hardware from AMD and Radeon to Intel and Nvidia. I kept components from my old computer, like the power supply and two HDDs. Everything appeared to install correctly and the system booted into the BIOS just fine (after a brief snafu with the CPU fan). My goal was to use the two hard drives and just be able to turn on the computer and load up my old Windows install with all the files, programs, and documents. I expected to have to call Microsoft to re-register the Windows install for the new hardware (since I had to do that last time I upgraded with the same Windows version). When the computer attempts to boot into Windows it briefly flashes a bluescreen and then restarts. System recovery gives a message something like "BadDriver Failover" something something. I assume this is because it's trying to use AMD drivers for an Intel chipset (or something...?) and I've been as-yet unsuccessful in getting it to boot into my old Windows partition. SO! I decided eff it, maybe I'll go visit my nearest Micro Center and buy a 200 GB SSD, install Windows onto that, and then... be able to access just the contents of both of my other hard drives? I don't intend on running any of the programs, but there were some saved files I would like to salvage from the 500 GB hard drive. The 1.5 TB hard drive only had files on it, no OS or applications, so I'd also expect to still be able to access it. Is this possible? Can I install the SSD, format/install Windows onto only it, and still access the contents of my two transferred-over drives?

    Read the article

  • Despeckle line art

    - by Dour High Arch
    We have a number of line-art charts unfortunately saved as JPEGs. They are now riddled with distracting compression artifacts or "speckles". Is there any way of removing these? I do not have the original files and it would be very difficult to recreate them. I am running Windows 7 and tried Paint.Net; none of the filters help. Posterize washed out all the colors and left the speckles. Blur makes text unreadable. Noise Reduction wrecks the antialiasing of curved lines and perversely enhances the speckles, making them look like checkerboards. Yes, I have Googled for software to do this; there are many programs that advertise despeckling, but after my experience with Paint.Net I do not want to experiment with applications that show no before-and-after images. The only example I have seen that does what I want is from a Photoshop tutorial, and I have dozens of files while the tutorial requires considerable manual fine-tuning. I would prefer to automate or batch-process this task. Commercial apps are fine, but I do not want to spend over $600 and learn a complex program for a single task.
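    For a free batch-processing route, here is a minimal sketch using Python with the third-party Pillow imaging library (an assumption - not something the asker mentioned). A 3x3 median filter is the classic despeckle operation; it removes isolated JPEG speckles but can soften thin lines, so it is worth testing on one chart before running the whole batch:

        from pathlib import Path
        from PIL import Image, ImageFilter  # pip install Pillow

        SRC = Path("charts")    # hypothetical folder of JPEG charts
        DST = Path("cleaned")   # output as PNG so no new artifacts are added
        DST.mkdir(exist_ok=True)

        for jpg in SRC.glob("*.jpg"):
            img = Image.open(jpg).convert("RGB")
            # Median filter: each pixel becomes the median of its 3x3 neighborhood,
            # which knocks out isolated speckles while mostly preserving edges.
            cleaned = img.filter(ImageFilter.MedianFilter(size=3))
            cleaned.save(DST / (jpg.stem + ".png"))
            print("processed", jpg.name)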

    Read the article

  • How to make Microsoft JVM work on Windows 7?

    - by rics
    I am struggling with the following problem: I cannot install MS JVM 3810 properly on Windows 7. When I start Internet Explorer 8 without starting any Java 1.1 programs, choosing Java custom settings under Internet options causes the browser to crash. I have some Java 1.1 programs that work well in Internet Explorer 8 on Windows XP after the installation of MS JVM 3810. I know it is not advised to use this old JVM, but porting the programs to newer Java is not a short-term option since they contain 3rd-party components; a complete rewrite is a long-term plan. Strangely, jview and appletviewer (jview /a) work from a console, so MS JVM 3810 is not completely busted - IE 8 just does not like it. The problem with the appletviewer is that it cannot connect to the server even though both signed and unsigned content in Java custom settings have been set to Enable all. (Since Java custom settings was unreachable due to the crash, the modifications - including My Computer - were performed through the registry and pre-checked to behave correctly on Windows XP and Internet Explorer 8.) If jview were working then I could at least think of a workaround. Is there a way to configure MS JVM or jview properly on Windows 7? Other options would be: checking Internet Explorer 9 Beta; using VirtualBox with Windows XP and the older IE in it; delaying the Windows 7 upgrade; ... Update: In the end we modified all the programs to work both as applet and as application. This way the programs can still be used from the browser on older Windows versions, while on Windows 7 the applications are started from the desktop. Installation on all user machines is easily solved since they already have a large common application drive. The code update is fortunately only a few lines of modification: adding a main method to the applet class. Furthermore, instead of the starting HTML page, a bat file is used to set the classpath before startup with jview.

    Read the article

  • Linux servers in a (primarily) Windows (AD) environment

    - by HannesFostie
    When I arrived at my current position, our environment consisted almost exclusively of Windows servers. However, I am a big fan of using Linux for certain applications, like the web gallery I was asked to set up, a simple SFTP server, Nagios for monitoring, etc. I do fine setting these up, but not being a Linux expert, I am not sure how to properly join these servers to the domain, and was therefore wondering what procedures or guidelines other people follow. We often use ping -a to quickly figure out the hostname of a certain server, but this does not seem to work for the Linux machines, most likely because of the whole WINS/NetBIOS thing, I assume. I just joined one server to the domain, but probably missed something because it's not working even after a DNS flush. Beyond that, the couple of procedures I've found so far are pretty extensive and most of the time don't seem worth the effort. Best case scenario, I download some kind of client (smbclient?), enter the domain name and maybe the server to use, supply an administrator password and that's it. Is that possible at all? Thanks

    Read the article

  • Need to set up a proxy on Linksys E3200 to filter home internet

    - by Justin Amberson
    The fact that I have a Linksys E3200 may not be important. I can configure the router through the web interface, but I don't know what the things I will be toggling are called. I already do simple port forwarding to access applications on my Mac remotely, so router admin is not something I technically need explained. I'm looking at running a proxy on my home computer that filters all HTTP traffic that goes through my router. So if my daughter is on her iPad and accesses Safari, my Mac will be the judge of the validity of the request. I need something like NetNanny, I guess, but local. Actually, anything that can filter all port 80 traffic, runs locally, and maybe validates with a password? I truly truly hope this question falls within the bounds of Serverfault. I'm not a total internet newb, but I'm at a loss for what to Google. If possible answer this question: Is there a webapp that can listen on port 80 and validate requests to port 80 with a password? If so, can I forward all traffic on port 80 to my Mac, to be re-routed to the user? Is this the same as a VPN? Thank you for your help. Justin
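    (It is not the same as a VPN - a VPN tunnels all traffic, while this is a forward proxy.) The usual tool for this job is a dedicated proxy such as Squid, but to make the concept concrete, here is a minimal sketch of a filtering HTTP forward proxy in Python (standard library only, plain-HTTP GET requests, no HTTPS); the blocklist is a hypothetical placeholder:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen
        from urllib.error import URLError

        BLOCKED = ("badsite.example.com",)  # hypothetical blocklist

        class FilterProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # In proxy mode the client sends the full URL as the request path.
                if any(host in self.path for host in BLOCKED):
                    self.send_error(403, "Blocked by home proxy")
                    return
                try:
                    with urlopen(self.path, timeout=10) as upstream:
                        body = upstream.read()
                    self.send_response(200)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
                except URLError:
                    self.send_error(502, "Upstream fetch failed")

        # Point the iPad's HTTP proxy setting at the Mac's IP, port 8080.
        HTTPServer(("0.0.0.0", 8080), FilterProxy).serve_forever()

    Per-request passwords on a proxy are normally done with the Proxy-Authorization header (an HTTP 407 challenge) rather than a web form, which is another reason to reach for Squid once the idea is proven.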

    Read the article

  • How to improve network performance between two Win 2008 KVM guests that already have virtio drivers?

    - by taazaa
    I have two physical servers with Ubuntu 10.04 Server on them. They are connected to each other with 1Gbps cards over a gigabit switch. Each of these host servers has one Win 2008 guest VM. Both VMs are well provisioned (4 cores, 12GB RAM) with RAW disks, and both are set up to use virtio for disk and network. My ASP.NET/SQL Server applications are running much slower compared to very similar physical setups. I used iperf to check network performance and got:

        Physical host 1 -> Physical host 2: 957 Mbit/s
        Physical host 1 -> Win 08 guest 1:  557 Mbit/s
        Win 08 guest 1  -> Physical host 1: 182 Mbit/s
        Win 08 guest 1  -> Win 08 guest 2:  111 Mbit/s

    My app runs on Win 08 guest 1 and guest 2 (web and db). There is a huge drop in network throughput (almost 90%) between the two guests, and the throughput does not seem to be symmetric between host and guest either. The CPU utilization on the guests and hosts is less than 2% right now (we are just testing). Apart from this, there have been random slowdowns in the network to as low as 1 Mbit/s, making the whole application unusable. Any help troubleshooting this would be appreciated.
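    If iperf itself is suspected, a rough stand-in using only the Python 3 standard library can cross-check the numbers (run with no arguments on the receiving guest, then with the receiver's IP on the sending guest; the port and transfer size are arbitrary choices):

        import socket, sys, time

        PORT, CHUNK, TOTAL_MB = 5001, 64 * 1024, 100

        def receiver():
            with socket.socket() as srv:
                srv.bind(("0.0.0.0", PORT))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn:
                    while conn.recv(CHUNK):   # drain until the sender closes
                        pass

        def sender(host):
            data = b"x" * CHUNK
            start = time.time()
            with socket.create_connection((host, PORT)) as s:
                for _ in range(TOTAL_MB * 1024 * 1024 // CHUNK):
                    s.sendall(data)
            secs = time.time() - start
            print(f"{TOTAL_MB * 8 / secs:.1f} Mbit/s")

        if __name__ == "__main__":
            receiver() if len(sys.argv) == 1 else sender(sys.argv[1])

    If this shows the same asymmetry, the problem is below the application layer (the virtio/vhost path or the Windows virtio NIC driver), not in iperf or the apps.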

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with a different URL?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share '\\server\share' that has a documents folder which I want to reach as '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc', but I don't want the file server's path to be exposed in my application's link to the file; I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html, under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can map a different path by putting a Context element in the server.xml file. So I put

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
          <!-- SingleSignOn valve, share authentication between web applications
               Documentation at: /docs/config/valve.html -->
          <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
          <!-- Access log processes all example. Documentation at: /docs/config/valve.html
               Note: The pattern used is equivalent to using pattern="common" -->
          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                 prefix="localhost_access_log." suffix=".txt"
                 pattern="%h %l %u %t &quot;%r&quot; %s %b" />
          <Context path="/foo" docBase="//server/share" reloadable="false"></Context>
        </Host>

    The Context element at the bottom was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" of a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to send information back to the monitoring server on. After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!! Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there the need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to building a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!
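    For comparison, this is what an SNMP poll looks like in practice - a minimal sketch assuming the third-party pysnmp library (the host, community string, and target are placeholders; 1.3.6.1.2.1.1.3.0 is the standard sysUpTime OID). The draw of SNMP is that the OID tree and the agents are already standardized and built into most network gear and operating systems, whereas a RESTful health check requires you to define and implement the schema on every monitored box:

        from pysnmp.hlapi import (
            getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
            ContextData, ObjectType, ObjectIdentity,
        )

        errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),           # SNMPv2c, placeholder community
            UdpTransportTarget(("192.0.2.10", 161)),      # placeholder agent address
            ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),  # sysUpTime
        ))

        if errorIndication:
            print(errorIndication)
        else:
            for name, value in varBinds:
                print(name.prettyPrint(), "=", value.prettyPrint())

    For a custom Java app reporting to a monitoring server the asker controls at both ends, a plain HTTP POST is a perfectly reasonable choice; SNMP pays off when existing NMS tooling (Nagios, Cacti, etc.) should consume the data.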

    Read the article

  • Securing communication between two Linux servers on a local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers, for ports that only need to be available to these servers (and no one else). Thus, communication between the two servers can be established by: (1) opening the required port(s) on both servers and authenticating according to the applications' rules, or (2) disabling iptables on the NICs the cross-connect cable is attached to (on both servers). Which method is more secure? In the first case, the needed ports are open to the external world but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs associated with the cross-connect cable, essentially all ports may be considered "open" between the two servers (so if the server presenting the DMZ is compromised, the attacker on the DMZ server could reach every port over the cross-connect cable). Any conventional wisdom on how to make communication secure between two servers, for ports only these servers need access to?
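    A third option worth noting - a sketch under the assumption that the cross-connect interfaces have their own private addresses (say 10.0.0.1 and 10.0.0.2): bind each private service only to the cross-connect address, so the port is simply never reachable through the public-facing NIC, independent of firewall state. In Python terms:

        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Bind to the cross-connect IP only (placeholder address and port);
        # clients arriving via the public interface get "connection refused".
        srv.bind(("10.0.0.2", 5432))
        srv.listen(5)
        print("listening on the cross-connect link only")

    Most real daemons expose the same idea as a config knob (e.g. a bind-address or listen directive), and it stacks with iptables rather than replacing it.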

    Read the article

  • Folder name in sidebar has changed, how do I get it back?

    - by otakustay
    My OS X is Simplified Chinese, so the user folders in the Finder sidebar look like this: AirDrop, 应用程序, 桌面, 文稿, 下载, 影片, 音乐, 图片. These stand for AirDrop, Applications, Desktop, Documents, Downloads, Movies, Music and Pictures. However, through some accident, my "Downloads" folder in the sidebar has changed to its original English name, and I don't know why, so my sidebar now looks like: AirDrop, 应用程序, 桌面, 文稿, Downloads, 影片, 音乐, 图片. I know a symlink to these folders could cause this, but I cannot find any symlinks to my ~/Downloads folder. So is there any way to get the Chinese name back? It looks so weird and inconsistent. I have a Time Machine backup from 3 days ago, but I have made quite a lot of modifications to my user folder in these 3 days, so Time Machine may not be a good choice.

    Read the article

  • What is the typical maximum number of database connections for Oracle running on Windows Server?

    - by Sake
    We maintain a database server that serves a large number of clients, each typically running several client applications. The total number of connections to the database server (Oracle 9i) reaches 800 connections at peak load, and the Windows 2003 server is starting to run out of memory. We are now planning to move to 64-bit Windows in order to gain higher memory capacity. As a developer I suggest moving to a multi-tier architecture with connection pooling, which I believe is the natural solution to this problem. However, in order to support my idea, I want information on: What exactly is the typical number of connections allowed for an Oracle database? What is the problem when the number of connections is too high - too much memory consumption, too many sockets opened, or too much context switching between threads? To be a little more specific, how could Oracle Forms applications scale to thousands of users without facing this problem? Should Oracle RAC be applied in this case? I'm sure the answer depends on quite a number of factors, like the exact spec of the hardware being used; I'm expecting a rough estimate or some real-world experience.
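    To make the pooling suggestion concrete, here is a minimal sketch of the middle-tier idea, in Python for brevity (make_connection() is a hypothetical placeholder for a real Oracle driver call). The point is that the pool caps total server-side sessions at a fixed size no matter how many client threads exist - 800 clients might share, say, 20 connections:

        import queue

        POOL_SIZE = 20

        def make_connection():
            # Placeholder: a real middle tier would open an Oracle session here.
            return object()

        class ConnectionPool:
            def __init__(self, size=POOL_SIZE):
                self._pool = queue.Queue(maxsize=size)
                for _ in range(size):
                    self._pool.put(make_connection())

            def acquire(self):
                return self._pool.get()    # blocks until a connection is free

            def release(self, conn):
                self._pool.put(conn)

        pool = ConnectionPool()

        def handle_request():
            conn = pool.acquire()          # borrow
            try:
                pass                       # run queries on conn
            finally:
                pool.release(conn)         # always return it

    This is the same pattern an application server (or Oracle's own shared-server mode) implements; queue.Queue makes it thread-safe without extra locking.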

    Read the article

  • Computer is dying--what should I be looking for?

    - by Will
    Okay, I'm a bit knowledgeable with pooters and such, but I'm confused. My computer is dying slowly, and I'm not sure what part is causing this. Computer details: Vista, Dell machine, Intel Q6600 2.4GHz quad core, standard memory and drive (unknown manufacturer). Symptoms: I would best describe the symptoms as memory corruption. After a couple of days on, I start getting applications crashing or failing to open for a lack of "resources". Sounds are corrupted. Onscreen text gets corrupted; the characters of the text are garbled, not the pixels on the screen. Video memory seems untouched, as I haven't seen any misplaced pixels. Recently I've lost files on disk. I've also experienced errors reporting a supposed lack of disk space, even though I have fifty gigs free. There was one point where I couldn't get past the POST when booting up; after I cleaned everything (see next) this hasn't happened. Diagnostic steps: First thing I did was clean the case. There was a lot of dust buildup on the heatsinks, so I cleaned all that up. No help. Next, I disconnected and reconnected everything, from power cables to memory (did not reseat the CPU). No change. Last, I ran the standard Vista memory diagnostics and ran chkdsk. Both reported no errors. I have not run any POST tests, now that I think about it. I'm at a loss at this point. Disk appears fine, memory too. I'd expect motherboard issues to result in the thing not booting up, yet it boots every time. What should I be looking at? What more can I do?

    Read the article

  • Good support for multiple desktops AND multiple monitors in Linux (Ubuntu)?

    - by Somebody still uses you MS-DOS
    I'm starting to have A LOT of open windows on my machine. Sometimes within a project I have e-mail/task management/personal e-mail/twitter, and a lot of different open applications/terminals in my Linux environment. Nowadays I have four workspaces: 1. Corporate management (e-mail) and corporate messenger; 2. Work (documents, requisites); 3. Dev (development: all gVim windows, terminal and Firefox for development); 4. Personal (personal stuff: personal e-mail, delicious, twitter and so on). Sometimes it would be interesting to have different workspaces for projects instead of this configuration I have nowadays, which is classes of work (bad name, I know, but I think you get the idea). So I'm starting to think about using two monitors: one with Corporate Management, Work and Personal. The second monitor is only for development: each workspace there would be a project being worked on, instead of groups of work like before (one workspace might be for implementing different classes, for example). My question is: I want to switch to the second monitor using the mouse, but still be able to change workspaces on the same monitor using keyboard shortcuts - the shortcuts wouldn't change monitors, just workspaces on the same monitor. Does Linux (Ubuntu 10.04 Lucid Lynx) support this envisioned setup? If so, how?

    Read the article

  • Need software to convert RAR to ZIP ("store" mode - no compression); extract/re-archive changes date/time attributes

    - by Larry78
    I have tried all of the following applications (the free ones from download.cnet.com), and none of them will convert a RAR archive to a ZIP without compressing the files ("store" mode). 7-Zip format would be fine, too. (The RAR is a "solid" archive with a password - which I know - files "stored" with no compression, created with WinRAR 3.5.1.) PeaZip, 7-Zip, FilZip, TugZip, SimplyZip SE, QuickZip, WinShrink: none of these apps will do the conversion at all. A couple of them let you try, but then fail with an error like "unknown header # #", which says something about the quality of the software. IZArc 4.1 comes the closest: it will convert a RAR to a ZIP, but it compresses the ZIP. There is a general preference setting for "store", but it doesn't affect conversions. I don't want to extract the RAR files and re-archive them because I need to preserve the modified/created file attributes. IZArc preserves them, but compresses the files. WinRAR itself has an option to convert archives, but I get the error "skipping encrypted archive" when I try to convert. It asks for the password first, and I know the password is right because it opens the archive and I can read/view all the files in it.
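    If scripting is acceptable, the conversion is small enough to do directly - a minimal sketch in Python assuming the third-party rarfile module (which in turn needs an unrar backend installed). Timestamps survive because each entry's modification time is copied from the RAR header into the ZIP header rather than round-tripped through files on disk, and everything is written with ZIP_STORED, i.e. no compression:

        import rarfile
        import zipfile

        SRC, DST, PASSWORD = "archive.rar", "archive.zip", "secret"  # placeholders

        with rarfile.RarFile(SRC) as rar, \
             zipfile.ZipFile(DST, "w", zipfile.ZIP_STORED) as zout:
            rar.setpassword(PASSWORD)
            for info in rar.infolist():
                if info.isdir():
                    continue
                data = rar.read(info)   # decrypt and read the entry into memory
                # Carry the original modification time into the ZIP entry.
                zinfo = zipfile.ZipInfo(info.filename, date_time=info.date_time)
                zout.writestr(zinfo, data, compress_type=zipfile.ZIP_STORED)

    Note that ZIP's standard header only stores a modification time, so a separate "created" date cannot be preserved inside the ZIP format itself.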

    Read the article

  • ZFS pool error: how to determine which drive failed in the past

    - by Kendrick
    I had been copying data off my pool so that I could rebuild it with a different pool version, to move away from Solaris 11 to one that is portable between FreeBSD/OpenIndiana etc. It was copying at 20MB/s the other day, which is about all my desktop drive can handle writing from the network; suddenly last night it went down to 1.4MB/s. I ran zpool status today and got this:

          pool: store
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error.  An
                attempt was made to correct the error.  Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
          scan: none requested
        config:

            NAME          STATE     READ WRITE CKSUM
            store         ONLINE       0     0     0
              raidz1-0    ONLINE       0     0     0
                c8t3d0p0  ONLINE       0     0     2
                c8t4d0p0  ONLINE       0     0    10
                c8t2d0p0  ONLINE       0     0     0

    It is currently a 3 x 1TB drive array. What tools would best be used to determine what the error was and which drive is failing? Per the admin doc, the second section of the configuration output displays error statistics, divided into three categories: READ - I/O errors occurred while issuing a read request; WRITE - I/O errors occurred while issuing a write request; CKSUM - checksum errors, meaning the device returned corrupted data as the result of a read request. It says low counts could be anything from a power flux to a disk event, but gives no suggestions as to what tools to use to check and pin down the cause.

    Read the article

  • PHP Web Server Solution (Apache/IIS)

    - by njk
    I apologize if this is too broad or belongs on Super User (please vote to move if it does). I'm in the process of creating requirements for an internal PHP web server to submit to our architecture team, and would like some insight on whether to use a Windows or *nix platform and what applications would be required. The server will host a small PHP application that will connect to SQL Server. The application will need to send mail. We would also like to incorporate an FTP server to allow files to be dropped in. From what I've read regarding a Windows platform using IIS, it seems as though IIS would only be advantageous if using a .NET or ASP application. Does IIS have mail functionality? Or how is mail traditionally configured (esp. on *nix)? Also, does IIS have directory configuration functionality like Apache does with .htaccess? Candidates so far - for a Windows-based solution: IIS (comes with FTP) or Apache (has a mod_ftp module); for a *nix-based solution: Apache.

    Read the article

  • MySQL has stopped accepting connections from users other than root

    - by John
    Hi there. I'm running a MySQL server along with Apache and Tomcat on a Gentoo box, and use phpMyAdmin to administer MySQL. A couple of hours ago I received a call - a user was unable to log in to phpMyAdmin. I logged on to phpMyAdmin as root and reset the password; the user was still not able to log in. I then decided to give it a go myself, and even I wasn't able to log in. I tried creating several user accounts, and none of them were able to access MySQL via JDBC, the mysql client, or phpMyAdmin. The only user that works is root. What's even stranger is that websites that connect to MySQL with a user other than root are still able to log in and retrieve content from the database (mainly WordPress and a Tomcat webapp). I have made sure it's not just cached; I was able to post SQL queries to the database via these web apps. However, I am unable to log in to phpMyAdmin or the mysql client with those users, and I am also unable to set up a connection with them for any new web applications. Any help is immensely appreciated.

    Read the article

  • Where does Windows store MSI files for uninstallation?

    - by Nilzor
    I'm trying to figure out how Windows (XP through 7) handles installation and uninstallation of MSI files. I have come up against situations where Windows Installer is unable to uninstall because it's missing the original MSI file, which leads me to believe that it stores a copy of all installed MSI packages somewhere. Where? I've had a couple of theories: 1. It expects the MSI to reside in the same folder it was installed from. The registry keys in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall do point to the original installation folder, and error messages when the MSI file is missing often point there too. However, removing the MSI file from this folder does not hinder the uninstallation process, so I've rejected this theory. 2. C:\Windows\Installer. This folder actually contains a bunch of seemingly randomly named MSI files, but the list is incomplete: I find entries in the registry key mentioned in 1) which have no MSI copy in this folder. So how does this work? How is Windows Installer able to uninstall MSI-installed applications even though the MSI is in neither 1) nor 2)?
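    To my knowledge, C:\Windows\Installer is indeed the cache, and the mapping from product to cached file is recorded per product in a LocalPackage registry value; when that value or the file it points at is gone, uninstalls fail with the "missing MSI" symptom. A minimal sketch for listing the mappings, assuming Python 3 on Windows (S-1-5-18 covers per-machine installs; per-user installs sit under other SIDs):

        import winreg

        BASE = (r"SOFTWARE\Microsoft\Windows\CurrentVersion\Installer"
                r"\UserData\S-1-5-18\Products")

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as products:
            i = 0
            while True:
                try:
                    squished_guid = winreg.EnumKey(products, i)  # compressed product GUID
                except OSError:
                    break  # no more subkeys
                i += 1
                props = BASE + "\\" + squished_guid + r"\InstallProperties"
                try:
                    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, props) as k:
                        name, _ = winreg.QueryValueEx(k, "DisplayName")
                        pkg, _ = winreg.QueryValueEx(k, "LocalPackage")
                        print(f"{name}: {pkg}")
                except OSError:
                    pass  # some entries lack these values

    Entries with no matching file in C:\Windows\Installer typically mean the cached package was deleted (e.g. by an overzealous disk-cleanup tool), which is exactly when uninstalls start failing.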

    Read the article

  • Things to consider when building a continuous integration server?

    - by Dave
    I'm new to continuous integration, but immediately realize its value, and I want to get one set up right away. I have played with TeamCity and have it working in a VM great. Now, I don't want to spend money on another system, so I was planning on just doing the VM again on a faster machine (i.e. my dev system). There are a few questions that come to mind with this: Hard disk allocation - how big should it be? Sure, 60GB seems like more than enough, but people also used to think that we'd never need more than 64KB of RAM Backups - is it even important to back up the integration server? Sure, I guess it's nice so that one doesn't have to go through the entire configuration process again, but I would think that's about it. I could snapshot my VM every time I do a configuration change, and then do a backup of applications only (ignore the buildAgent stuff). Migration - if I want to go away from a VM on my dev system, to a new server, which maybe even runs Windows Server 2003, is it easy enough? Perhaps this is a particular point best suited for StackOverflow.

    Read the article

  • Windows XP VM on VMWare ESXi 4.1 "pausing" / blocked occasionally

    - by FelixD
    We have an issue with Windows XP SP3 VMs on VMware ESXi 4.1.0 (the free version): they sometimes seem to "pause" for several minutes. This happens rarely (maybe once a month per VM, or at least is only noticed that often), but it is still an issue for us. It happens on three different but similar VMs on three quite different hosts (different hardware). I have the feeling that the "pausing" is not actually the CPU blocking but probably the hard disks, though I'm not 100% sure. The servers have one IDE disk (C:) and one SCSI disk (D:), and it might be either of the two. I have seen scheduled tasks simply not starting for up to 9 minutes and then running normally again at normal speed - they were totally blocked. This is not a load issue: the VMware hosts have average load, and the VMs in question already have reserved CPU resources plus high priorities for CPU and disk. The Windows boxes run mainly MySQL, Tomcat, FileZilla Server, Cygwin stuff, Java + R applications, the VMware client, Elusiva Terminal Server Pro, and the Nagios client. Not sure if this might be related to any of that software (e.g. Elusiva). Trying to debug this, there was nothing visible in the Windows event log, other logs in C:\Windows, VMware events, etc. Unfortunately the vmware.log file ends with "Log throttled". We found that we ran into two VMware bugs there: the VMware client writes lots of bogus messages to vmware.log, which we have now disabled (log level error setting), plus the bug that VMware does not unthrottle the log (at least not so far, despite VM reboots). I know there is not much to go on, which may also be why I haven't found anything related on the web or on ServerFault so far, but maybe some of this rings a bell with someone? Or please direct me to what further info to post. I hope the vmware.logs get unthrottled eventually (we can't easily restart the hosts at the moment). Thanks for any input!

    Read the article
