Search Results

Search found 23890 results on 956 pages for 'issue'.

Page 16 of 956

  • Issue in upgrading Zenity

    - by user109187
    I am currently using Ubuntu 12.04 and am attempting to install Steam by following the post on omgubuntu. However, I run into problems when running the command sudo dpkg -i steam.deb && sudo apt-get install -f. I get a message that my version of Zenity is too low: the minimum required version is 3.4.0-0ubuntu4, while I currently have 3.4.0-0ubuntu2. I tried sudo apt-get install zenity, but it does not update anything and reports: zenity is already the newest version. I also tried installing the latest version of Zenity via the ubuntuupdates website, but it still does not work. Ubuntu Software Centre indicates that it is already installed and provides no option to update. :( Any idea how to update Zenity on my system?
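
    One way to approach this, a sketch only, assuming the newer Zenity build is published in the precise-updates pocket rather than the base precise archive, is to enable that pocket and ask apt for an upgrade explicitly:

      sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu precise-updates main universe"
      sudo apt-get update
      apt-cache policy zenity                      # check whether the candidate is now 3.4.0-0ubuntu4 or later
      sudo apt-get install --only-upgrade zenity

    If apt-cache policy still shows only 3.4.0-0ubuntu2 as the candidate, the assumption above is wrong and the newer package has to come from somewhere else (a PPA or a manually downloaded .deb).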

    Read the article

  • Grub2 mutual dependency issue

    - by A T
    For various reasons I am installing .deb dependencies for grub2 using dpkg directly (rather than apt-get).

      root@ubuntu:/dl# dpkg -i grub-gfxpayload-lists_0.6_amd64.deb
      Selecting previously unselected package grub-gfxpayload-lists.
      (Reading database ... 249808 files and directories currently installed.)
      Preparing to unpack grub-gfxpayload-lists_0.6_amd64.deb ...
      Unpacking grub-gfxpayload-lists (0.6) ...
      dpkg: dependency problems prevent configuration of grub-gfxpayload-lists:
       grub-gfxpayload-lists depends on grub-pc (>= 1.99~20101210-1ubuntu2); however:
        Package grub-pc is not configured yet.
      dpkg: error processing package grub-gfxpayload-lists (--install):
       dependency problems - leaving unconfigured
      Processing triggers for man-db (2.6.7.1-1) ...
      Errors were encountered while processing:
       grub-gfxpayload-lists

    By "configure" I assume it means install + configure, so I tried:

      root@ubuntu:/dl# dpkg -i grub-pc_2.02~beta2-9_amd64.deb
      (Reading database ... 249818 files and directories currently installed.)
      Preparing to unpack grub-pc_2.02~beta2-9_amd64.deb ...
      Unpacking grub-pc (2.02~beta2-9) over (2.02~beta2-9) ...
      dpkg: dependency problems prevent configuration of grub-pc:
       grub-pc depends on grub2-common (= 2.02~beta2-9); however:
        Package grub2-common is not installed.
       grub-pc depends on grub-pc-bin (= 2.02~beta2-9); however:
        Package grub-pc-bin is not installed.
       grub-pc depends on grub-gfxpayload-lists; however:
        Package grub-gfxpayload-lists is not configured yet.
      dpkg: error processing package grub-pc (--install):
       dependency problems - leaving unconfigured
      Processing triggers for man-db (2.6.7.1-1) ...
      Errors were encountered while processing:
       grub-pc

    How do I solve this problem?
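
    A sketch of one way out of the circular dependency, assuming the matching grub2-common and grub-pc-bin .debs have also been downloaded to /dl (those two file names are assumptions): hand dpkg all of the related packages in a single invocation so it can order the configuration steps itself, then retry anything left unconfigured.

      # the grub2-common and grub-pc-bin file names below are assumed, not taken from the question
      dpkg -i grub2-common_2.02~beta2-9_amd64.deb grub-pc-bin_2.02~beta2-9_amd64.deb \
              grub-pc_2.02~beta2-9_amd64.deb grub-gfxpayload-lists_0.6_amd64.deb
      dpkg --configure -a    # re-run the configure phase for any package still left unconfigured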

    Read the article

  • G210M Screen brightness control issue

    - by Bapun
    I have a Sony VAIO VPCCW15FG laptop with an NVIDIA G210M graphics card, and I can't adjust the screen brightness. If I use the Fn shortcuts, the brightness notification shows up and its level indicator changes, but the actual screen brightness does not. I was able to adjust the brightness level in ZorinOS. Here, though, nothing happens when I change the brightness level until the last few steps, where the brightness then changes radically with each step.
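
    A generic way to narrow this down, a diagnostic sketch rather than a fix specific to this model (the acpi_video0 name below is an assumption about which backlight interface the kernel exposes), is to test the sysfs backlight interfaces directly and see whether any of them actually move the panel:

      ls /sys/class/backlight/                                          # list the backlight interfaces the kernel registered
      cat /sys/class/backlight/*/max_brightness                         # see each interface's range
      echo 50 | sudo tee /sys/class/backlight/acpi_video0/brightness    # acpi_video0 is an assumed name; write a mid-range value and watch the panel

    If writing a value through one interface changes the panel while the Fn keys do not, the keys are simply bound to the wrong interface; if none of them work, the problem is lower down in the driver.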

    Read the article

  • OpenGL memory issue - quite strange

    - by user4707
    Hello, I have heard that textures consume a lot of memory, but I am surprised by how much. I have 7 textures (1024, 16 bit each), and while my app runs it consumes 57 MB of memory. I think that is "a bit" too much. I am writing a 2D application (no cocos or other framework). The strange part is that if I compile my app with the rendering call glDrawArrays disabled, it uses only 27 MB, which is about 30 MB less. Do you have any idea why? I create the textures before rendering, of course. The rendering looks like this:

      [EAGLContext setCurrentContext:context];
      glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
      glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
      glClear(GL_COLOR_BUFFER_BIT);
      glPushMatrix();
      glEnableClientState(GL_VERTEX_ARRAY);
      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      glEnable(GL_TEXTURE_2D);
      glEnable(GL_BLEND);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
      TeksturaObrazek *obrazek_klaw = [[AppDirector sharedAppDirector] obrazek_klaw];
      glBindTexture(GL_TEXTURE_2D, [[obrazek_klaw image_texture] name]);
      glVertexPointer(2, GL_FLOAT, 0, vertex1);
      glTexCoordPointer(2, GL_FLOAT, 0, vertex2);
      glColor4f(1, 1, 1, alpha);
      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
      glDisable(GL_BLEND);
      glDisable(GL_TEXTURE_2D);
      glDisableClientState(GL_TEXTURE_COORD_ARRAY);
      glPopMatrix();
      [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    It looks like a standard routine. I have spent about two days looking for an answer and I still have no clue.
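
    A quick back-of-the-envelope check, assuming "1024" means square 1024 x 1024 textures (the question does not say):

      7 textures x 1024 x 1024 texels x 2 bytes (16-bit) ~= 14.7 MB

    So one resident copy of the textures would account for roughly 15 MB, and two copies (for example a CPU-side copy kept by the loading code plus the driver's own copy, which may only be fully committed once the texture is actually sampled by glDrawArrays) would land close to the observed 30 MB difference. That is a plausible reading of the numbers, not a confirmed diagnosis.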

    Read the article

  • Why CoffeeScript is an issue

    - by Renso
    Other than some obvious concerns, my main concern is support in the open source community. "anon" from the CoffeeScript team sent this to me after I requested the team's input on concerns I had raised and wanted to get others' take on them: "Thanks for confirming that only idiots willingly program in Java and C#", or the following, from the same person: "Oh and finally, you should definitely create jShort. Even though I know you will fail before you even start, I would love to laugh at your attempts and it would be perfect for you since you ride the short bus." This kind of comment reflects badly on the CoffeeScript team, and hence CoffeeScript is not an option for us as a company to consider. Another example of why some open-source community projects get no traction.

    Read the article

  • using apple-mobile-web-app-capable and cache.manifest issue [migrated]

    - by LocoMike
    So I have this simple HTML file:

      <!DOCTYPE HTML>
      <html manifest="cache.manifest"><head>
      <meta name="apple-mobile-web-app-capable" content="yes">
      <meta name="apple-mobile-web-app-status-bar-style" content="black">
      <title>Test</title>
      <meta http-equiv="content-type" content="text/html">
      <meta name="HandheldFriendly" content="true">
      <meta name="viewport" content="width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;">
      <style type="text/css"></style></head>
      <body marginwidth="0" marginheight="0" topmargin="0" leftmargin="0">
      <h1>hello</h1>
      </body>
      </html>

    My cache.manifest is simply:

      CACHE MANIFEST

    I run this website on my local server (localhost). I load it in iPhone Safari and it works fine. I then stop the server, load it again, and it still works, because the offline cache is doing its job. However, if I save the website as a home screen icon on the iPhone and then try to open it with the server stopped, it won't load. And yet, if I open it at least once with the server running (it will work), then I can open it later without problem. It looks like even though the page was cached in Safari, it is not cached for this saved app. Does anybody know how to get around this? Thank you!
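
    One thing worth ruling out first, a guess rather than a confirmed cause: the manifest being served with the wrong MIME type. Browsers require text/cache-manifest for the offline cache to be honored, and a standalone home-screen launch fetches resources through its own cache rather than reusing Safari's, so it has to be able to rebuild the cache on its own:

      # check what the local server actually sends for the manifest
      curl -sI http://localhost/cache.manifest | grep -i "content-type"

    If the header is not text/cache-manifest, adding that mapping to the server configuration is the first fix to try.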

    Read the article

  • Setting up fastcgi on an Ubuntu server (socket file permissions issue)

    - by gray alien
    I am trying to set up mod_fcgid on my server. Part of the requirement is that Apache needs to create a socket file for mod_fcgid. I created the folder for Apache to write the socket data to, /var/run/apache2/fcgid, and then specified it in my fcgid.conf file as follows:

      SocketPath /var/run/apache2/fcgid/sock

    I then changed the owner of the folder to www-data (the Apache user) and gave the owner full permissions to the folder and its contents. At that point I was able to run my test fcgi app. When I rebooted the machine, however, my fastcgi app no longer worked. After some investigation, I found that ownership of /var/run/apache2/fcgid had been reset to root, with permissions reset to 700. I have the following questions: Is there something specific about the /var/run folder? Why are the permissions being reset after a reboot? Should I move my socket file to another location (in case root automatically takes ownership of contents in this folder for security reasons)? I am running Ubuntu 10.04 LTS 64 bit.
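
    What is likely happening, a reasonable guess rather than a verified diagnosis: /var/run is typically a tmpfs that gets recreated empty (and root-owned) at every boot, so any ownership changes made there do not survive a reboot. A minimal sketch of a workaround is to recreate the directory at boot, before Apache starts:

      # recreate the fcgid socket directory with the right ownership on every boot
      mkdir -p /var/run/apache2/fcgid
      chown www-data:www-data /var/run/apache2/fcgid
      chmod 755 /var/run/apache2/fcgid

    Where exactly to hook this (an init script, /etc/rc.local, or the Apache init script itself) is a matter of taste; the key point is that the directory has to be re-created after each reboot.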

    Read the article

  • Windows 7 and Ubuntu Boot issue

    - by user115137
    I had the idea to dual boot Win 7 and Ubuntu, and what I did was the following: I made a clean install of Win 7 using all of my hard drive, then used the Ubuntu live CD and GParted to partition my drive as follows:

      /dev/sda1  ext4      20GB   (Linux root)
      /dev/sda2  ntfs      100GB  (Win7)
      /dev/sda3  ext4      350GB  (Home)
      /dev/sda4  extended  4GB    (swap)

    The thing is, when installing Ubuntu I deleted the partition Win 7 creates for its boot sector and recovery and then resized the drive to look like the layout above, and Ubuntu installed GRUB to the MBR. When GRUB boots I can see Ubuntu but not Windows; how can I chainload it? Or should I fix the Windows MBR with the Windows 7 installation disk and try to set up the dual boot from there? I don't really care which of the 2 bootloaders I end up using, I just want the dual boot to work out. Thanks
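
    If the Windows boot files on /dev/sda2 are still intact, GRUB can usually pick Windows up on its own. A sketch of the standard check, run from the installed Ubuntu:

      sudo os-prober          # lists other operating systems GRUB's scripts can detect
      sudo update-grub        # regenerates grub.cfg, adding a chainload entry for anything found

    The caveat, and it is a real one given that the original boot/recovery partition was deleted: if the Windows bootloader files went with that partition, os-prober will find nothing, and the Windows side has to be repaired first (for example with the Windows 7 installation disk's startup repair) before GRUB has anything to chainload.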

    Read the article

  • When Canonicalization is an Issue

    Although extremely hard to pronounce, canonicalization is a hot topic right now. If there are a lot of URLs that lead to pretty much the same page, you're going to make the search engines work extra hard and spend a lot more time crawling all the different URLs. Often times, this means that they'll miss the important pages of your website because your crawl time is limited or too slow.

    Read the article

  • Dual monitor, permission issue

    - by cenna75
    I had a dual monitor configuration going for quite some time. One day, after moving the computer to another location and reconnecting everything, it changed such that I saw everything in double (while being very much sober); I think it's called the 'mirror' configuration. Anyway, from then on, there was nothing to be done through the System Settings GUI to change it back, as it wouldn't allow me to save any modification. The error I get when clicking 'Save' is: "Failed to create file /home/me/monitors.xml.xxxxxx. Permission denied", with xxxxxx being a random code that changes every time. However, I can apply any configuration I want just fine from the terminal, in my case:

      xrandr --output DVI-I-1 --right-of VGA-1

    So I do have a workaround, and this is therefore a question more out of curiosity. What could possibly have changed to make it impossible through the GUI, while still letting me change the config using xrandr without being root? I'm having a hard time believing it could have anything to do with disconnecting/reconnecting the monitors... Any idea? Thanks!
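
    A plausible line of investigation, only a guess from the error message and not something the question confirms, is a permissions problem on the home directory itself or on an existing monitors.xml: the GUI writes its settings by creating a temporary monitors.xml.xxxxxx in /home/me and renaming it, so it needs write access there, while xrandr never touches that file at all. A quick look:

      ls -ld ~                                                   # the GUI needs write permission on the home directory
      ls -l ~/monitors.xml ~/.config/monitors.xml 2>/dev/null    # and on whichever monitors.xml already exists
      # if the file turns out to be owned by root, something like this hands it back:
      sudo chown "$USER": ~/monitors.xml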

    Read the article

  • Basic Java drawing program: issue with squares

    - by Caminek
    I'm trying to create a simple drawing program that creates a square where the user clicks and drags. The square displays correctly as long as either x or y remains positive with respect to the original click position. If both x and y are negative with respect to the original click position, the square grows/shrinks, but also wanders about the screen. Is there a way to swap the origin point from top-left to bottom-right, or to keep the square from wandering?
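
    A common way to handle this, stated generally rather than against the code in question (the names below are placeholders): recompute the rectangle's top-left corner on every drag as x = min(startX, dragX) and y = min(startY, dragY), with width = |dragX - startX| and height = |dragY - startY|. The drawn origin then always tracks the upper-left of whatever rectangle the two points span, so dragging up and to the left grows the square in place instead of letting it wander.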

    Read the article

  • Weird internet connection issue on Ubuntu 14.04

    - by user287876
    I have an old Gateway PC with Windows 7 installed alongside the original OS (XP, I think?). A while ago my friend walked me through installing Ubuntu 14.04 alongside the others, because Win7 was having issues with the display driver (it would unexpectedly crash while trying to watch videos on YouTube or other places), and I can't update it because the original computer settings are for XP or something, not Win7. :( We recently switched from AT&T U-Verse to Comcast. I would have stayed with Ethernet, but somehow the adapter I have wouldn't connect to Comcast's thing during their installation, so I was given a wireless USB adapter. It worked fine up until the last few days. It's not a problem on Win7 (I'm using it right now): the connection is strong, things load. On Ubuntu though, it SAYS it connects even before I log in to my account, but when I log in and bring up Firefox, it will load the homepage and maybe one or two other pages I visit before suddenly just... endlessly trying to load the page. I would normally go in and manually select 'disconnect' from the connections options menu to refresh/restart it, like I've done a few times already, but lately it won't respond, and a little while later an error message comes up saying the request timed out/failed. Restarting my computer doesn't help. The other weird thing is that I've noticed the signal (when it's actually working properly, before the last few days) is comparatively weaker than when I'm on Win7. But my location doesn't change; it's the same computer, same connection.
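
    A few generic first checks for a flaky USB wireless adapter on Ubuntu, offered as a diagnostic sketch only (the wlan0 interface name is an assumption):

      lsusb                                              # identify the adapter's chipset
      dmesg | grep -iE "wlan|firmware|usb" | tail -n 20  # look for driver or firmware complaints
      iwconfig 2>/dev/null | grep -i "power management"  # see whether wifi power saving is on
      sudo iwconfig wlan0 power off                      # wlan0 is assumed; turning power saving off is a common workaround for stalls

    If the chipset turns out to have a known-bad in-kernel driver version, that would also square with Windows 7 (using the vendor driver) behaving fine on the same hardware.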

    Read the article

  • Sparxsystems Enterprise Architect and issues/tasks tracking system?

    - by peperg
    We (a development team of 4 to 7 people) use Sparxsystems EA for requirement analysis, modeling, etc. Do you know any well-working methods for using EA together with a task/issue tracking system like Redmine/Mantis/Trac? The problem is not avoiding duplicated functionality (there are issues/tasks/changes in both Redmine and EA) but having a user-friendly interface (preferably web-based) to manage tasks and issues. By user-friendly I mean:

      - a "my tasks" page
      - effort / time tracking
      - an easy "add issue" flow for everyone (a simple bug tracking system)
      - mail notifications

    Read the article

  • Good hosted sites to manage quality testing?

    - by Chirag Patel
    Basically, I would like to manage quality testing with an issue management system built for quality testing. I can't use a typical issue management system such as Lighthouse or FogBugz, because each test is written as a ticket and 20 to 30 tickets need to be duplicated (with no history) every time we start a quality cycle. Do any such (hosted) sites exist? We're currently using a Google Spreadsheet so it can be collaboratively edited.

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, now used only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list:

      - make a complete filesystem clone with Antonio Diaz's ddrescue
      - run Disk Warrior on the copy and repair whatever errors occurred
      - wipe out all ACLs on the entire drive
      - set all permissions to the same value - wide open 777
      - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
      - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index between each addition to observe for issues (interesting here is that no issues occurred except with the Documents folder - yet when I transferred only the Documents folder on its own to a newly formatted drive, there was no trouble; it appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
      - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with "md" in Activity Monitor - All Processes - and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md* processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3, to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are - the data on the volume dates back to music projects and compositions from 2003 and before - and he needs to be able to query for results. Anyone got any ideas?

    --- update 2-6-11 ---

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop output stops moving. I then proceeded to run the same command on mds to see what it was doing, and here's what I get, repeatedly:

      501 57 mds 21 /
      501 57 mds 21 /Volumes/Sno Leppard
      501 57 mds 21 /Volumes/Tiger
      501 57 mds 21 /Volumes/Leppard
      501 57 mds 21 /Volumes/Disk Warrior
      501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    --- final edit ---

    I finally resolved the issue and here is how I did it. I used the terminal command sudo opensnoop -p PID, where the PID is the process ID of the processes I was monitoring - all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see above), I contacted Apple and got to their highest level of support; they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds was at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
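
    For anyone wanting to reproduce the monitoring approach described above, the commands boil down to this (the PID 57 is just the mds PID from the output quoted above; substitute whatever ps reports on your system):

      ps ax | grep -E "[m]ds|[m]dworker"   # find the PIDs of the Spotlight processes
      sudo opensnoop -p 57                 # watch every file that process opens; note the last one before it exits

    The enclosing folder of that last file is the one to add to System Preferences > Spotlight > Privacy.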

    Read the article

  • Issue with www to non www redirect

    - by bob
    Hello, I am on Slicehost and I followed the articles they provide for DNS redirection, and the www to non-www URL redirection does work. However, what if I want www.domain.com to be the default domain? Would I put www.domain.com. as my DNS record name, or would I keep domain.com. as my DNS record and then do something else? Basically, what happens is that if someone goes to the URL www.domain.com/directory/something.html, they are redirected to domain.com and not to domain.com/directory/something.html. I would like the second thing to happen, not just go to domain.com and call it a day. I am running nginx and am confounded about how to solve this issue. I'm not sure whether it's an nginx issue or a DNS issue. Any help would be greatly appreciated!
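
    One quick way to see which layer is at fault, a diagnostic sketch using the placeholder domain from the question: look at the redirect the server actually sends. DNS can only point a name at an address; dropping the /directory/something.html part can only happen in the web server's redirect rule, and in nginx that usually means the redirect target is missing $request_uri.

      curl -sI http://www.domain.com/directory/something.html | grep -i "^location"

    If the Location header comes back as just http://domain.com/, the fix belongs in the nginx server block for www.domain.com, not in DNS.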

    Read the article

  • Steam app and age verification issue

    - by TronicZomB
    I have OSX 10.9.4 with the Steam App (API v016 and package versions 1407966480) installed. When I would visit game pages that required age verification, my birthday would be filled in due to being logged in and I would just click OK to move past. Now the age verification shows up with January 1st, 2014 each time and it will not let me change the date. The drop down menus just show up blank. I have tried to log out of my computer, log out of Steam, restart Steam, restart my computer, and reinstall Steam. Nothing has worked and it continues to have this issue. When I visit the Steam website, the age verification drop downs work just fine. What is causing this issue? And/or how can I fix this issue?

    Read the article

  • Issue with Toshiba Satellite C50 A-1JM

    - by Nathan Hawkes
    I am having some odd problems with my Toshiba laptop. For the last two days, it has not loaded past the boot screen for hours, if at all. When I force the laptop off and switch it back on, it goes to the Preparing Automatic Repair process but will not complete the process and go to the desktop. However, this only happens when the battery is plugged in. When I remove the battery and run the laptop on power, it works without issue. Would this be a battery issue (given it works without the battery) or a hard drive issue (given it won't get past the boot screen)? If neither, what would you suggest as a solution?

    Read the article

  • Issue with emails that have attached emails

    - by Jake
    There is a problem with our email in my organisation that happens to some people. When a remote sender sends an email that has an attached email, the receiver gets the email but the attached email is blank. The receiving mail server is MDaemon Pro. I also notice that the email header could be corrupted. I checked the MDaemon KB and found nothing regarding this issue, but I also highly doubt that this is an MS Outlook 2007 issue. Anyone have any ideas? Putting this issue aside, I feel that we really should not attach emails to emails; there is a reason for the "Forward" button. I can't understand why it is so difficult for them to just forward the email instead of dragging and dropping one into the other in Outlook. Furthermore, if the attached email also has its own attachments, the resulting nesting becomes quite unbearable. Don't you think so?

    Read the article

  • SQL 2008 R2 Mirroring Issue

    - by CWL
    Windows 2008 R2 with SQL 2008 R2, using database mirroring across the WAN in an HA setup with one witness. One issue I am having is that during a failure (every so often) the system fails over, or tries to, but leaves both databases in a Restoring state. My guess is the failover issue happens when the WAN link bounces and the systems get confused. The usual fix is to reboot the SQL servers. Has anyone seen this type of failure? While this does not happen often, it does cause an issue and a concern about HA not being fully trusted.
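
    When this happens, a first step short of rebooting, offered as a hedged sketch (PRINCIPALSERVER and YourDb are placeholders, and this assumes the mirroring session is merely suspended rather than broken), is to check what each partner thinks the mirroring state is and try resuming the session:

      # list mirroring role/state for every mirrored database on this instance
      sqlcmd -S PRINCIPALSERVER -E -Q "SELECT DB_NAME(database_id) AS db, mirroring_role_desc, mirroring_state_desc, mirroring_witness_state_desc FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL"
      # attempt to resume a suspended session on the database in question
      sqlcmd -S PRINCIPALSERVER -E -Q "ALTER DATABASE YourDb SET PARTNER RESUME"

    If both partners report themselves as mirror/restoring, neither will serve the database until one is brought back into the principal role, which is where the witness and automatic failover are supposed to come in.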

    Read the article

  • Strange DNS issue with internal Windows DNS

    - by Brady
    I've encountered a strange issue with our internal Windows DNS infrastructure. We have a website hosted on Amazon EC2 with the DNS running on Amazon Route 53. In the publicly facing DNS we have the wildcard record set up as an A record Alias pointing to an AWS Elastic Load Balancer sitting in front of our EC2 instances. For those who are not aware, the A record Alias behaves like a CNAME record, but no extra lookup is required on the client side (see http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/CreatingAliasRRSets.html for more information). We have a secondary domain whose www subdomain is a CNAME pointing to a subdomain on the primary domain, which resolves against the wildcard entry. For example, www.secondary.com is a CNAME to sub1.primary.com, but there is no explicit entry for sub1.primary.com, so it resolves to the wildcard record. This setup works without issue publicly.

    The issue comes in our internal DNS at our corporate office, where we use the same primary domain for some internal-only facing sites. In this setup we have two Active Directory DNS servers, one Server 2003 and one Server 2008 R2 instance. The zone is an AD-integrated zone, but it is not the AD domain. In the internal DNS we have the wildcard record pointing to a third external domain, which is also hosted on Route 53 with an A record Alias pointing to the same ELB instance. For example, *.primary.com is a CNAME to tertiary.com, so in effect you have www.secondary.com as a CNAME to *.primary.com, which is a CNAME to tertiary.com. In this setup, attempting to resolve www.secondary.com will fail. Clearing the cache on the Server 2003 instance will allow it to resolve once, but subsequent attempts will fail. It fails even with a clean cache against the 2008 R2 server. It seems that only Windows clients are affected; a Mac running OS X Mountain Lion does not experience this issue. I'm even able to replicate the issue using nslookup.

    Against the 2003 server, with a freshly cleaned cache, I receive the appropriate response from www.secondary.com:

      Non-authoritative answer:
      Name:    subdomain.primary.com
      Address: x.x.x.x (Public IP)
      Aliases: www.secondary.com

    Subsequent checks simply return:

      Non-authoritative answer:
      Name:    www.secondary.com

    If you set the type to CNAME you get the appropriate responses all the time. www.secondary.com gives you:

      Non-authoritative answer:
      www.secondary.com    canonical name = subdomain.primary.com

    And subdomain.primary.com gives you:

      subdomain.primary.com    canonical name = tertiary.com

    And setting the type back to A gives you the appropriate response for tertiary.com:

      Non-authoritative answer:
      Name:    tertiary.com
      Address: x.x.x.x (Public IP)

    Against the 2008 R2 server things are a little different. Even with a clean cache, www.secondary.com returns just:

      Non-authoritative answer:
      Name:    www.secondary.com

    The CNAME records are returned appropriately. www.secondary.com returns:

      Non-authoritative answer:
      www.secondary.com    canonical name = subdomain.primary.com

    And subdomain.primary.com gives you:

      subdomain.primary.com    canonical name = tertiary.com
      tertiary.com    internet address = x.x.x.x (Public IP)
      tertiary.com    AAAA IPv6 address = x::x (Public IPv6)

    And setting the type back to A gives you the appropriate response for tertiary.com:

      Non-authoritative answer:
      Name:    tertiary.com
      Address: x.x.x.x (Public IP)

    Requests directly against subdomain.primary.com work correctly.
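
    For anyone trying to reproduce the comparison, the checks above can be scripted against each internal server explicitly. A sketch only; dc2003.internal and dc2008r2.internal are placeholder names for the two AD DNS servers:

      nslookup -type=A www.secondary.com dc2003.internal
      nslookup -type=CNAME www.secondary.com dc2003.internal
      nslookup -type=A www.secondary.com dc2008r2.internal
      nslookup -type=CNAME subdomain.primary.com dc2008r2.internal

    Pinning the server on each query rules out the client bouncing between resolvers, which makes the "works once after a cache flush, then fails" pattern easier to attribute to a specific server's handling of the CNAME-to-wildcard-to-CNAME chain.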

    Read the article

  • Moving from ColdFusion 8 to ColdFusion 10 - Migration Fails

    - by XenoFoxx
    After making several attempts to migrate from a ColdFusion 8 Standard server to a ColdFusion 10 Standard server, it feels like I am "almost" there. I'm using the 64-bit installer from Adobe's website, on a Windows Server 2008 (64-bit) server with IIS 7.0. The installation itself goes smoothly and the services start and are running, but at the end of the installation it says "ColdFusion Installed, but with errors" and it generates a log file. The log file reads:

      Migration Error: : Check that "C:\ColdFusion8" is a valid directory and is an installation of either ColdFusion MX 6 or ColdFusionMX 7

    and further down says:

      Status: WARNING
      Additional Notes: WARNING - Could not migrate settings from previous version of ColdFusion
      Custom Action: com.macromedia.ia.action.MigrateColdFusionAction
      Status: ERROR
      Additional Notes: ERROR - class com.macromedia.ia.action.MigrateColdFusionAction NonfatalInstallException null

    The applicationHost.config file has new XML referencing the ColdFusion 10 directory, but IIS is still using ColdFusion 8. I'm also going to guess that the settings in the CF Administrator have not been migrated, based on the message in the log above. I've followed the instructions on Adobe's site, including ensuring that ASP.NET, CGI, ISAPI Extensions, and ISAPI Filters are all enabled. I've also enabled IIS 6 Metabase Compatibility even though I don't think it's needed. Has anyone else had similar issues with ColdFusion 10 and IIS 7? Currently I have uninstalled CF 10 and reverted back to

    Read the article

  • Why does my motherboard go through an endless reboot cycle with 8 GB of memory installed but not with 6 GB?

    - by nizm0
    I never got an answer when I googled this about a year ago, and I have an extra stick of memory I'd like to be able to use. When it is inserted, the computer starts and then reboots immediately, in an endless reboot cycle. As soon as the 4th stick is removed, the computer works fine. Right now I have 6 GB of my 8 GB installed. Is there a solution I am missing to get this motherboard to actually boot with all 8 GB (it supports it)? Right now it won't even boot to the BIOS with 4 sticks, only 3.

      Memory: 1 x G.SKILL 4 GB (2 x 2 GB) 240-Pin DDR2 SDRAM DDR2 1100 (PC2 8800) Dual Channel Kit Desktop Memory, Model F2-8800CL5D-4GBPI - Retail
        URL: http://www.newegg.com/Product/Product.aspx?Item=N82E16820231194
      Motherboard: 1 x GIGABYTE GA-EP45-UD3R LGA 775 Intel P45 ATX Intel Motherboard - Retail
        URL: http://www.newegg.com/Product/Product.aspx?Item=N82E16813128359

    Read the article

  • Updated XAMPP with MySQL, all my tables are missing

    - by user371699
    I just updated XAMPP to a newer version, which included updating MySQL from 5.5 to 5.6. In phpMyAdmin, however, all of my tables still appear in the left navigation panel, but the main window shows that all my databases are empty (except for information_schema and a couple of other default tables). Clicking on a table in the navigation panel gives me a "table doesn't exist" message. It also looks like information_schema.tables doesn't have my tables. Can anyone assist me with this? I did make a complete backup of all my databases before the upgrade, but I first want to see if I can fix this the "normal" way. Furthermore, I'm not sure whether the MySQL upgrade made changes to the information/performance databases, so I don't know if I can restore the old ones. Thank you.

    EDIT: Continuing my searching, I realized that only the InnoDB databases are missing. I've tried running the following to no avail:

      /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp
      /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp --datadir=/opt/lampp/var/mysql

    The my.cnf file in /opt/lampp/etc contains the following InnoDB settings:

      innodb_data_home_dir = /opt/lampp/var/mysql/
      innodb_data_file_path = ibdata1:10M:autoextend
      innodb_log_group_home_dir = /opt/lampp/var/mysql/
      # You can set .._buffer_pool_size up to 50 - 80 %
      # of RAM but beware of setting memory usage too high
      innodb_buffer_pool_size = 16M
      # Deprecated in 5.6
      #innodb_additional_mem_pool_size = 2M
      # Set .._log_file_size to 25 % of buffer pool size
      innodb_log_file_size = 5M
      innodb_log_buffer_size = 8M
      innodb_flush_log_at_trx_commit = 1
      innodb_lock_wait_timeout = 50

    What could possibly be wrong? Why is information_schema not updating correctly? It looks like /opt/lampp/var/mysql has all my tables within the database directories, but they're still not showing up in information_schema.
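
    A couple of hedged checks before restoring anything; these assume the new server is at least starting, and the error-log path is a guess at XAMPP's usual location. InnoDB can end up disabled (or start with an empty data dictionary) if the existing ibdata1/ib_logfile files don't match what the 5.6 binary expects, in which case the table files sit on disk but the server simply doesn't know about them:

      /opt/lampp/bin/mysql -u root -e "SHOW ENGINES;"                          # is InnoDB listed, and is it available?
      /opt/lampp/bin/mysql -u root -e "SHOW ENGINE INNODB STATUS\G" | head -40 # current InnoDB state, if it is running
      tail -n 100 /opt/lampp/var/mysql/*.err                                   # path assumed; look for InnoDB startup complaints

    If the error log shows InnoDB refusing to start against the old ibdata1 or log files, the usual next step is to follow MySQL's documented 5.5-to-5.6 upgrade path (a clean start against the old data directory followed by mysql_upgrade) rather than re-running mysql_install_db.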

    Read the article
