Search Results

Search found 3632 results on 146 pages for 'deleted'.


  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my MacBook Pro. I can get the Apache server to start, but not the MySQL server, on both the default Apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!". Here's what's in my MySQL error log:

        100513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended
        100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql
        100513 2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead.
        100513 2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2
        100513 2:00:16 [Warning] One can only use the --user switch if running as root
        100513 2:00:16 [Note] Plugin 'FEDERATED' is disabled.
        100513 2:00:16 [Note] Plugin 'ndbcluster' is disabled.
        InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 16777216 bytes!
        100513 2:00:16 [ERROR] Plugin 'InnoDB' init function returned error.
        100513 2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100513 2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100513 2:00:16 [ERROR] Aborting
        100513 2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete
        100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended

    A couple of things:
    1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.)... how can I tell which one is actually being used?
    2) I deleted ib_logfile0 and ib_logfile1 as recommended by another post on Server Fault, and then ended up with more errors:

        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100519 16:01:31 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        100519 16:01:31 InnoDB: Started; log sequence number 0 44556
        100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100519 16:01:31 [ERROR] Aborting

    And then I got this the next time I tried to run it:

        InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35
        InnoDB: Check that you do not already have another mysqld process
        InnoDB: using the same InnoDB data or log files.

    Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.
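
    A quick way to answer question 1 is to ask mysqld itself which option files it reads; the InnoDB and --skip-bdb errors are then usually fixed in that same file. A hedged sketch (paths assume the default MAMP layout shown in the log above; adjust the innodb_log_file_size value to whatever your active my.cnf actually specifies):

        # Show the option files mysqld reads, in the order it reads them
        /Applications/MAMP/Library/libexec/mysqld --verbose --help 2>/dev/null | grep -A 1 "Default options"

        # In that file: remove the obsolete "skip-bdb" line (MySQL 5.1+ no longer
        # accepts it), and either set innodb_log_file_size to match the existing
        # 5 MB ib_logfile0/ib_logfile1, e.g.
        #   [mysqld]
        #   innodb_log_file_size = 5M
        # or stop MySQL cleanly and delete the old ib_logfile* so they are
        # recreated at the configured 16 MB size.

        # "Unable to lock ... error: 35" means another mysqld is still running:
        ps aux | grep mysqld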

    Read the article

  • opennms postgres connection slow

    - by krisdigitx
    I am running the OpenNMS application server on a physical server and the database on an ESXi VM. Recently the OpenNMS web console has been very slow to load, so I deleted most of the events from the database table. Now both servers have no load at all, and the psql connection from the application server to the database server is also very fast, but somehow the OpenNMS web console is still slow. This is the strace output from the OpenNMS process ID:

        18629 futex(0x2aaac77d8a84, FUTEX_WAIT_PRIVATE, 453, NULL <unfinished ...>
        3015 futex(0x2aaabc4a2ee4, FUTEX_WAIT_PRIVATE, 323, NULL <unfinished ...>
        10863 futex(0x2aaabbebaa94, FUTEX_WAIT_PRIVATE, 395, NULL <unfinished ...>
        25260 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10859 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10982 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        3011 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        25260 futex(0x2aaae098fc28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10982 futex(0x2aaac0eaf928, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        3011 futex(0x2aaab0cb1728, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10859 futex(0x2aaac062c328, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        25260 <... futex resumed> ) = 0
        10982 <... futex resumed> ) = 0
        3011 <... futex resumed> ) = 0
        10859 <... futex resumed> ) = 0
        25260 futex(0x2aaabc38b6b4, FUTEX_WAIT_PRIVATE, 443, NULL <unfinished ...>
        10982 futex(0x2aaabc5d7b94, FUTEX_WAIT_PRIVATE, 99, NULL <unfinished ...>
        3011 futex(0x2aaac7c55334, FUTEX_WAIT_PRIVATE, 183, NULL <unfinished ...>
        10859 futex(0x2aaabbb8c9d4, FUTEX_WAIT_PRIVATE, 347, NULL <unfinished ...>
        10846 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10846 futex(0x2aaae9022428, FUTEX_WAKE_PRIVATE, 1) = 0
        10846 futex(0x2aaabe0030b4, FUTEX_WAIT_PRIVATE, 251, NULL <unfinished ...>
        20281 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        14100 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        2925 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10843 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        20281 futex(0x2aaac7e93628, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        14100 futex(0x2aaac04e8c28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        2925 futex(0x2aaaec085528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10843 futex(0x2aaab20b0528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>

    It shows lots of connection timeouts, so I think it is the connection between the Java application and the database that is causing the issue. Any ideas how to troubleshoot this?
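
    Since futex waits only say that JVM threads are parked, a Java-level thread dump usually tells more than strace about where the web console is actually stuck. A hedged sketch; the PID and log path are placeholders, and jstack requires a JDK on the box:

        # Find the OpenNMS JVM
        ps -ef | grep [o]pennms | grep java

        # Ask the JVM for a thread dump; it is written to the JVM's stdout log
        kill -3 <opennms-java-pid>

        # Or capture it directly with a JDK tool
        jstack -l <opennms-java-pid> > /tmp/opennms-threads.txt

        # Threads blocked in socketRead0 talking to the PostgreSQL host point at
        # slow queries or an exhausted connection pool; threads merely WAITING
        # on a pool monitor are idle and harmless.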

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory. The Problem: Now, these last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied: Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but he may have been wrong). I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues geting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyways. Frankly, my memory of today is clouded by intense frustration). Additionally, I was confused which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain or 2) is that an unnecessary step? The SSL Certificate Strikes Back When I thought I had the proper SSL certificate assigned for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account claiming that the certificate didn't match the domain of "mail.domain.com". Is this an issue that will be resolved by problem #2 or is it a separate one entirely? Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it. ... The only mailbox left was the Administrator account, the built-in one I couldn't delete. 
So I attempted to manually uninstall it following several guides online only to now be stuck unable to launch the installer and have to get my system wiped AGAIN for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>
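
    On the certificate-type question: either request can work, but the request generated from Exchange itself is the usual route, because it lets you put every client-facing name into one certificate and enable it for the Exchange services directly. A hedged sketch in the Exchange Management Shell; the thumbprint, server name and domain are placeholders, not taken from the actual setup:

        # Create the request (include every name clients will use)
        $req = New-ExchangeCertificate -GenerateRequest -SubjectName "CN=mail.domain.com" `
                   -DomainName mail.domain.com,autodiscover.domain.com -PrivateKeyExportable $true
        Set-Content -Path C:\mail_domain_com.req -Value $req

        # After the CA issues the certificate, import and enable it for IIS/SMTP
        Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content -Path C:\mail_domain_com.crt -Encoding Byte -ReadCount 0)) |
            Enable-ExchangeCertificate -Services "IIS,SMTP"

        # Point the client-facing URLs at the mail host name, e.g.
        Set-OwaVirtualDirectory "SERVER\owa (Default Web Site)" -ExternalUrl https://mail.domain.com/owa

    Outlook's name-mismatch warning usually means one of the Autodiscover/EWS/OWA URLs still points at a host name that is not on the certificate, so sorting out the bindings and URLs above generally clears it as well.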

    Read the article

  • What would cause an .exe to vanish without a trace?

    - by Peter pete
    I have a few computers. One computer, at home, one day suddenly had its pgbouncer.exe vanish. The antivirus didn't have it in its virus chest [avast]. I couldn't find the bgbouncer anywhere. All the other pgbouncer files remained where they used to be, except the exe had vanished. I hadn't uninstalled it, nor had anyone else used the machine. I hadn't installed any new software since the previous time I had used it either. Just now, however, my TV computer was running out of disk space, which was weird because I had python setup to do transcoding and archiving. I logged in and voila! python.exe had vanished !! Once again, Avast.exe didn't have it in its virus chest, and dunno! I do this time, with the TV computer, know exactly the date that python vanished. Sat the 18th my python scripts ran fine. Sat the 19th python was gone. I'm going to do some hunting in the event log to see what happened that day. But if anyone has experienced vanishings before and has a clue what happened, I would like to know. FYI: Both computers (bgbouncer vanish and python vanish) were running Win7 with the RDP hack and both on SSDs and both with Avast. Both computers had all windows updates set to manual(to prevent random stuff changing!) and neither had recently had any windows updates manually applied. FYI2: Tv computer had since the beginning of October dropbox running all the time trying to download two files. Sadly, a temporary download of each of these two files by dropbox resulted in Avast freaking out and virus-chesting them, and then dropbox downloading them partially again, before being dropboxed. Now, these two files were binaries from a program I had personally written and were clean on other machines. Since the python vanishing I have deleted these two binaries from dropbox (using the website) and dropbox exe on the tv computer is now at peace. I don't think this should cause python.exe to vanish though :/ New edit: On the 18/10/2013, at 0742 my python script ended with an error: "file still in use" which was unexpected but I shrugged it off since sometimes media portal doesn't release the recording. But on second thoughts is weird since the show in use would have been finished recording the day before. On the 18/10/2013 at 0807 the windows event log complains that several drivers required for CutePDF, send to onenote 2010, send to onenote 2013, microsoft xps document writer aren't installed. I just checked now and indeed those printers have vanished! New update! I found my python.exe that had been removed. It was still in the C:\Python33\ directory except it had been renamed to a random string charater.tmp (ie, it was made into a temp file) with a creation date of 19/10/2013 at 0600:02 am. Now, the computer normally wakes at 6am to do transcoding. What could have moved my python file into a tmp file?
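
    For hunting through the event log around the time python.exe was turned into a .tmp file, a hedged PowerShell sketch (the window matches the 19/10/2013 06:00 timestamp mentioned above; adjust log names and times as needed):

        # Everything logged in a window around the vanish time
        Get-WinEvent -FilterHashtable @{
            LogName   = 'System','Application'
            StartTime = (Get-Date '2013-10-19 05:45')
            EndTime   = (Get-Date '2013-10-19 06:15')
        } | Sort-Object TimeCreated | Format-Table TimeCreated, ProviderName, Id, Message -AutoSize

        # Avast keeps its own text reports of anything it moved or renamed;
        # the exact folder varies by Avast version, so check its report/log
        # directory under C:\ProgramData as well.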

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was being treated as if I am "Shopping or seeking Product Recommendations" even though I was NOT - BTW they have deleted my comments too which were not offensive in nature. anyway - I have re-phrased some parts of my question and I hope SF Admins "Do Not Modify / Edit" this one - will be most grateful for that. I have a lot of respect for the People who visit this SITE and help others ! Just To clarify : Just to go by SF rules - I am not seeking someone to Design this solution, I am simply seeking real world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have or any problems they may have faced while doing something similar above with these products. I am also not asking for Capacity Planning for Storage, We have done some research and I am seeking Expert Assurance / Suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows7 32 and 64) with a few 100s with Windows XP 32/64 Bit. I have read the implementation guide for SEP and have read tech-notes for Desktop Recovery 2011. Our team have planned to deploy this as follows : 1 x dedicated SQL 2008R2 for Symantec Endpoint Protection (Instead of using the Embedded Database) 1 x Dedicated SQL 2008R2 for Symantec Desktop Recovery 2011 (Instead of using the Embedded Database) 1 x Dedicated W2K8 R2 Box for the SEPM (Symantec Endpoint Protection Manager - Mgmt. APP) 1 x Dedicated W2K8 R2 Box for the Symantec Desktop Recovery 2011 Management Application Agent Deployment : As per Symantec Documentation for both of the above, an agent can be pushed via the Mgmt. Application (provided no firewalls are blocking ports required etc. - we have Windows firewall disabled already). Server Hardware : Per SQL Server : 16GB RAM + SAS DISKS + Dual XEON, RAID-10 for the SQL DB or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM Server : 16GB RAM + SAS DISKS + DUAL XEON System Recovery MGMT SERVER : 16GB RAM + SAS DISKS + DUAL XEON Above is the initial plan we have for 3000 - 4000 client workstation (Windows) Now my Questions :-) a) If we had these users distributed amongst two sites with AD DC / GC in each site, How would I restrict SEPM and Desktop Mgmt. solution to only check for users in their respective site ? b) At present all users are under one building but we are going to move some dept. to a new location (with dedicated connectivity), How would we control which SEPM / MGMT Server is responsible for which site ? c) We have netbackup in our environment backing up other servers, I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery Mgmt. Server) via netbackup or I can use System recovery 2011 server edition on all 4 of these boxes as well. (License is not an issue as we have the complete symantec portfolio included in our license). d) Now - Saving Desktop backups - What strategies have you implemented ? Any best practice recommendation for a large user base ? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery Server itself or backup to the users hard drive locally and then copy it over to a network location ? Suggestions welcome :-) If you have anything to add / correct - that will be really helpful before diving into the actual implementation phase. Will be most grateful with your suggestions, recommendations and corrections with above - Many Thanks !

    Read the article

  • Can not access network computers anymore

    - by Johny Skovdal
    Last Thursday (03/05/12) I got a new computer to be able to work from home. I plugged it, by cable, into the company network and installed most of the software needed for me to do so, by accessing a share on my stationary computer at work. I had no issues here whatsoever, and everything just worked. Yesterday evening I tried accessing the company network through Windows VPN, and while I was able to connect to the network, I was unable to connect to any computers on the network. I did, however, get an error when connecting, but I can't seem to get the error again, to get the details of the error message. Today I am sitting on the company network again, and now I cannot access anything on the network like I could last Thursday, though I can ping all the computers I am attempting to access. Here is a list of details that might help in troubleshooting this issue (updated).

    List of observations / actions:
    - My computer is identical to another computer that has no issues. It is not on the domain but rather on the default workgroup, but this was not an issue last Thursday, so I am assuming it still is not.
    - I am able to access my e-mail on the Exchange server.
    - I can connect to our TFS server from Visual Studio but not from Explorer. I can also connect to database servers and Remote Desktop.
    - I can see several computers when browsing network computers, but I am unable to connect to any of them.
    - When trying to connect to a computer I am consistently met with the error code "0x80070035" (network path not found). I also get the 0x80070035 error when double-clicking the target computer from the Network UI.
    - I am not met with a login dialog when trying to access a computer, as I should be, since I am not on the domain. (I did log in to Exchange, Remote Desktop and TFS, though.)
    - Between Thursday where it worked and Sunday evening where it did not, I have installed quite a few security updates, plus various tools etc. that I need for programming.
    - I have tried accessing by computer name and IP and neither of them work. I can ping by computer name.
    - I have deleted all (1 entry) stored network credentials.
    - I am able to access my computer from the target computer.
    - Client and server can see each other on the network, i.e. Network Discovery is enabled.
    - I am using the network profile "Work".
    - When accessing the network through VPN, I am unable to get anything to work using computer names, but all of the above applies when using IP addresses instead of computer names.
    - I run Windows 7 Home Premium on my computer.

    Using PowerShell, attempting to access a share I get the following error (ComputerName and ShareName being correct values, of course):

        PS C:\Users\MyUser> cd \\ComputerName\ShareName
        Set-Location : Cannot find path '\\ComputerName\ShareName' because it does not exist.
        At line:1 char:3
        + cd <<<< \\ComputerName\ShareName
            + CategoryInfo          : ObjectNotFound: (\\ComputerName\ShareName:String) [Set-Location], ItemNotFoundException
            + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand

    However, pinging the same machine (ping ComputerName) from PowerShell gets a response immediately. (As mentioned in the list of observations/actions, I tried the above with the IP address again on VPN, and got the same result.)

    Conclusion: So to sum up, pretty much the only thing I cannot do is access the other computers through browsing (explorer.exe, PowerShell, map network drive, etc.). That leaves me thinking the path cannot be resolved somehow when connecting to other computers through browsing, even though the path gets resolved perfectly by all kinds of other services. Any recommendations as to what I can try next to resolve the issue? :)
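
    A few targeted checks from an elevated command prompt may narrow down whether this is SMB itself or name resolution, since ping and Remote Desktop clearly work. A hedged sketch; ComputerName and ShareName are placeholders:

        rem Does an explicit SMB mapping give a better error than 0x80070035?
        net use \\ComputerName\ShareName

        rem NetBIOS name resolution (browsing can use this even when ping/DNS work)
        nbtstat -a ComputerName

        rem Is the SMB client stack actually running?
        sc query lanmanworkstation
        sc query mrxsmb20

        rem List what the client currently thinks it is connected to
        net use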

    Read the article

  • Can I make my drives visible and change their partition type without losing my data?

    - by user165408
    I have made a lot of mistakes and now I cannot see my hard disk, nor can I start my operating system on my laptop. All my passwords and important files are on my HDD without any backup. I followed this course of action:
    1. Changed my hard disk partitions to dynamic just for getting a 5th partition. (1st mistake)
    2. Decreased partitions to 4 again.
    3. Backed up the operating system from the 4th to the 3rd partition with Norton Ghost.
    4. Booted from a live CD for Windows XP.
    5. Formatted the 4th partition and moved all my important data from the 1st and 2nd partitions to the 4th partition.
    6. Deleted the 1st and 2nd partitions and got 1 partition from half of the empty space. So I have just 3 partitions and empty space between the 1st and 2nd partitions.
    7. Tried to install Windows 8 to the first partition, but it did not allow it because the partition is dynamic. It also did not allow installing to the other partitions.
    8. Tried to install Windows XP to the 1st partition, but it said if I continue I cannot use the other drives. Therefore I backed out of installing it.
    9. Booted from the Windows XP live CD, then increased the 1st partition by a bit less than 400 MB of empty space. I thought it would therefore be adjacent, but it was shown as 2 partitions. In My Computer I see just 3 drives.
    10. Using Norton Ghost I recovered my OS to the 1st partition. (2nd mistake, it was on the 4th partition originally)
    11. Booted from a Windows XP live CD. I tried to install bcdedit to the Windows XP live CD but it did not work. Then I tried to install EaseUS Partition Master Home Edition. It installed with errors, then I started it and it showed me an error like there is no hard disk. I looked at my PC and my drives were not there.
    12. Booted from the Norton Ghost CD, and it did not show me my drives either, though before I was able to see them. I checked the numbers of the partitions shown by the Norton Ghost utility and they still have the same numbers, so I should be able to see my drives, but I cannot see them now.

    My hard disk is shown as external dynamic now, so I cannot see any drive in my PC in the live Windows XP. There are two options: the first one is import external disk and the second one is convert disk to basic. Will they delete my data? I fear booting from CDs like the Windows XP live CD, the Norton Ghost CD, and the operating system CD/DVD, because they may overwrite a few MB of their data onto my data. These recovery tools already exist in the Windows XP live CD by The Ultimate Boot CD for Windows. Can any of them help me?
    - CompuAppa SwissKnife V3
    - DBXtract
    - Disk Investigator
    - Fab's AutoBackup 2.0
    - FileRecovery
    - Floppy Repair
    - Free Undelete
    - Handy Recovery
    - Recovery Manager
    - Restorastion
    - Restorastion Help File by UBCD4Win
    - UnChk
    - Unstoppable Copier

    Finally: How can I make it so that my drives are visible again without losing my data? How can I convert my dynamic partitions to basic without losing my data?

    Read the article

  • Windows Media Player won't launch on Vista - how to repair or reinstall it?

    - by rpm1200
    My friend asked me to look at her Acer Aspire laptop with Vista Home Premium as it is no longer playing DVDs. I found that Windows Media Player would not launch. I found this thread, which contained a number of suggestions, none of which solved the problem. Here is what I tried:
    - Tried running WMP via desktop shortcut, QuickLaunch bar or going to Program Files\Windows Media Player\wmplayer.exe. In all cases, wmplayer would launch then terminate immediately (verified through the Processes tab in Task Manager).
    - Tried running wmplayer.exe as Administrator. The UAC dialog would come up, I'd approve, then wmplayer would launch and terminate immediately.
    - Uninstalled all non-Microsoft media programs except RealPlayer, iTunes, QuickTime, Acer Arcade (the laptop owner uses all those apps).
    - Tried running Program Files\Windows Media Player\setup_wm.exe as Administrator, it launched but said that a newer version of WMP was already installed.
    - Deleted the "Windows Media" folder located under %userprofile%\appdata\local\Microsoft then tried starting WMP - wmplayer would launch and terminate immediately.
    - Register wmp.dll by typing "regsvr32 wmp.dll" in an Administrator cmd window then tried starting WMP - wmplayer would launch and terminate immediately.
    - Run "SFC /SCANFILE" in an Administrator cmd window - get an error message that it found invalid system files and could not fix them, so look at the log file cbs.log. The log file shows that there are broken files associated with Windows Sidebar (which the user does not use) but none relating to WMP.
    - Log off to safe mode and run "SFC /SCANFILE" in an Administrator cmd window again - same results.
    - Try to download and install XP WMP - the microsoft.com site recognizes the OS as Genuine and allows the download, but when I launch the installer it says the system is not Genuine. Clicking the link directs me back to IE where I can authenticate the system as Genuine. The installer still fails to recognize the system as Genuine. It is a Genuine Vista installation.
    - Try to run this update (KB931621). The installer said it did not apply to the system.
    - Set Windows Media Player as default in Program Access and Defaults. Same results.
    - Tried running "for %a in (%systemroot%\system32\wm*.dll) do regsvr32 /s %a" in an Administrator cmd window - same results.
    - Went to this Knowledge Base article (947541) and ran the Microsoft Fix It. The Fix It ran successfully, but WMP would still launch and terminate immediately.
    - Multiple reboots in the process of doing all of these steps.
    After all this, looked in the Application and Security logs. No events pertaining to WMP were logged. The computer was preinstalled with Vista Home Premium and I have the Acer backup DVDs which will reimage the drive. I do not have Vista install DVDs. Reimaging the system is not an option. I'd also rather not restore the system to an earlier point unless it's absolutely necessary. What else can I do to repair or reinstall WMP?
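
    One commonly suggested extra step for a wmplayer.exe that exits silently is re-registering the script engines alongside the WMP libraries, since a broken jscript/vbscript registration can stop WMP at startup without logging anything. A hedged sketch from an Administrator cmd window; it is harmless to run even if it turns out not to be the cause:

        regsvr32 jscript.dll
        regsvr32 vbscript.dll
        regsvr32 wmp.dll
        regsvr32 wmpdxm.dll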

    Read the article

  • Hosed Windows 7 permissions

    - by Anthony
    Here is the most interesting thing I've noticed since the problems started: If I go into a control panel/system module (in this case the Resource Monitor) that has a "Check Online" type option, Firefox (my default browser) opens right up without a problem. But if I just start Firefox from any shortcuts (start menu, desktop, etc), the Firefox process starts up (and the start menu icon starts glowing) only to end without notice a few seconds later. Possibly related: If I start up in Safe-Mode (w/o Networking, but haven't tried with yet), I can start up FF or Chrome just fine, but if I attempt to open Chrome normally, I get a permissions error. Opera and Safari seem to be okay (mostly). Safari crashes when I try to download any files. All of the above leads me to believe that some (but clearly not all) core files have messed up permissions. Or rather, that I no longer have permission. System still does, based on Firefox opening without fail when the system initiates it. I've run MS Forefront once in normal mode, Malwarebytes twice in normal mode and once in safe-mode. One trojan found and deleted, but the problem persists. Two other things worth mentioning: I accidentally duplicated my library... I thought I'd try to add the "Internet" folder to my start menu, next to music and downloads. The first advanced thing I tried was "create new library". I clearly misunderstood what this means. I thought it was a way to add virtual folders to the library (which I thought, in turn, would allow me to choose it as a link on the start menu), but instead it recreated my already existing user folder, AppData and all. I didn't notice this until today. Then I tried setting permissions for my User folder to full control, recursively... Confused but not giving up,I thought I could maybe create a shortcut to the NetHood folder manually, but instead got hit with an access denied error. So I tried to change the permission levels for all sub-folders to my user folder so that I had full control. I got several access denied errors along the way. At this point I gave up, went out, ended up caught in the rain and stuck on a friend's couch and showing up late for work the next day. Thanks for nothing, Microsoft. When I finally got home today (20 hours later), I noticed that Firefox was acting really strange. I tried opening Chrome to see if the problem was client side or server side, and instead got the above-mentioned "you don't have permission to open this program" alert. And I think that's the whole story. Oh, I also did a system restore, but not chose a point from this morning (an auto update), and it worked but the problem wasn't fixed. And then all the earlier restore points were gone. So the questions are: a) is there a way to set the admin and user privs back to "default"? b) would this, in anyone's expert opinion, fix the problems I'm having? c) how come being logged in as an admin isn't the same as being logged in with admin privs? It seems that half the time I have to do run as admin for fairy standard things because i'm being treated as me-theuser and not me-theadmin. Thanks for reading.
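
    To question (a), about putting the user-profile ACLs back to something like the defaults: NTFS permissions can be reset to the inherited ones with icacls, and ownership retaken with takeown. A hedged sketch from an elevated command prompt; run it against the profile folder only, not the whole drive, and replace Anthony with the actual profile name:

        takeown /F C:\Users\Anthony /R /D Y
        icacls C:\Users\Anthony /reset /T /C /L

        rem /reset replaces explicit ACEs with the inherited defaults
        rem /T recurses, /C continues past errors, /L acts on symlinks themselves

    Whether that alone fixes the browser behaviour (question b) is harder to say, since the duplicated library and the earlier recursive permission change may have touched more than the profile, but it is the standard way to undo a hand-rolled ACL change.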

    Read the article

  • Windows 7 disk errors after a few hours of runtime

    - by GFK
    I'm having trouble understanding what is going on with my work PC. Whenever I boot it, it runs fine for a while, then starts to randomly show disk errors. The displayed error often contains the message "not enough storage is available to process this command", although depending on the application that fails it can be different. This has happened for weeks now and is getting worse. This is what troubles me:
    - It never seems to impact critical parts of the system (no BSOD, no freeze).
    - Only some applications seem impacted, refusing to function correctly after a while: Outlook 2010 cannot download RSS feeds anymore, Firefox 6 or IE9 cannot download anything bigger than 3MB without failing, Windows Update fails, all msi installers fail, Visual Studio 2010 starts failing in weird manners...
    - It only happens after a while using it (typically 3 hours, but it seems that installing a program or compiling several times makes it shorter).
    - Rebooting solves it (temporarily).

    The system: The OS is Windows 7 Pro Spanish SP1, 32 bits. The system is an HP Compaq 6000 Pro with 4 GB memory (only 3.4GB usable since the system is 32bit), one 500GB hard drive. Installed applications include: Visual Studio 2010, SQL Server 2008 R2, VMWare Workstation 7, Microsoft Security Essentials, Office 2010. Shutting down all related services and processes doesn't seem to change anything.

    The diagnostics I've run so far:
    - Hard drive: 465GB, 165GB free.
    - Process Explorer: physical and virtual memory seem ok (pagefile is 5.3GB, physical memory usage 70%, system commit 39%).
    - Windows Memory Diagnostic tool: OK.
    - CHKDSK returned:

        488282111 KB total disk space.
        281668248 KB in 265779 files.
        150188 KB in 62949 indexes.
        0 KB in bad sectors.
        571755 KB in use by the system.
        The log file has occupied 65536 kilobytes.
        205891920 KB available on disk.

      For non-Spanish speakers, that means all ok.
    - SMART diagnostic tools (DiskCheckup) report all values normal. Temperatures are in the normal range (HWinfo).
    - The event viewer doesn't seem to contain any significant message.
    - Ran CCleaner 3, without any noticeable effect.

    I was thinking about some file number limit (between Visual Studio projects and other applications, there are around 300.000 files on the hard drive), but I couldn't find any. It's possible there is something related with the use of the temporary folders (it's the only explanation I have for why applications fail but Windows doesn't), but I cannot confirm that. The only thing I cannot find out is whether chkdsk reporting 65MB for the log is normal; it seems since Vista it always reports this. Any other cleaning/diagnostic tool you might know of?

    Edit: I ran several other tools since I first published the question:
    - Seagate SeaTools (the HD manufacturer's analysis tool): complete test run OK.
    - Intel Rapid 10.1 (the HD controller manufacturer's troubleshooting tool): the HD is ok.
    - Microsoft Desktop Heap Monitor:

        Desktop Heap Information Monitor Tool (Version 8.1.2925.0)
        Copyright (c) Microsoft Corporation. All rights reserved.
        Session ID: 1  Total Desktop: (46464 KB - 11 desktops)

        WinStation\Desktop                        Heap Size(KB)  Used Rate(%)
        WinSta0\Winlogon (s1)                               128           3.6
        WinSta0\Disconnect (s1)                              64           3.8
        WinSta0\Default (s1)                              20480           3.0
        msswindowstation\mssrestricteddesk (s0)            1024           0.2
        __X78B95_89_IW__A8D9S1_42_ID (s0)                  1024           0.2
        Service-0x0-3e5$\Default (s0)                      1024           0.6
        Service-0x0-3e4$\Default (s0)                      1024           0.3
        Service-0x0-3e7$\Default (s0)                      1024           2.1
        WinSta0\Winlogon (s0)                               128           1.9
        WinSta0\Disconnect (s0)                              64           3.8
        WinSta0\Default (s0)                              20480           0.0

      All ok, desktop heap usage < 5%.

    Edit 2: I tried totally resetting my account by creating a new one, logging in under this new one and deleting the first one (local rights and files), then logging back in with this deleted account (it is a domain account). No luck. Also, I found out the error is often "not enough storage is available to process this command". Searching on the internet, I found an old troubleshooting tip (setting a registry key to raise the IRP stack limit, whatever it is) which did not change anything.
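
    Since "not enough storage is available to process this command" usually points at an exhausted kernel resource rather than disk space, it may be worth logging pool and handle usage from boot until the failures start. A hedged sketch using the built-in typeperf; the counter names are the English ones, so on a Spanish Windows the localized names may be needed:

        typeperf "\Memory\Pool Nonpaged Bytes" "\Memory\Pool Paged Bytes" ^
                 "\Process(_Total)\Handle Count" "\Process(_Total)\Thread Count" ^
                 -si 60 -o C:\pool-usage.csv

        rem Let it run until an application starts failing, then check whether one
        rem of the curves has been climbing steadily; a leaking process also shows
        rem up in Process Explorer's Handles / Nonpaged Pool columns.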

    Read the article

  • How to convert a DataSet object into an ObjectContext (Entity Framework) object on the fly?

    - by Marcel
    Hi all, I have an existing SQL Server database, where I store data from large specific log files (often 100 MB and more), one per database. After some analysis, the database is deleted again. From the database, I have created both an Entity Framework model and a DataSet model via the Visual Studio designers. The DataSet is only for bulk importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework model, whose CreateQuery method is exposed via an interface like this:

        public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new()
        {
            return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name);
        }

    Now, sometimes my files are very small and in such a case I would like to omit the import into the database, and instead just have an in-memory representation of the data, accessible as entities. The idea is to create the DataSet, but instead of bulk importing, to directly transfer it into an ObjectContext which is accessible via the interface. Does this make sense? Here's what I have done for this conversion so far: I traverse all tables in the DataSet, convert the single rows into entities of the corresponding type and add them to an instantiated object of my typed entity context class, like so:

        MyEntities context = new MyEntities(); //create new in-memory context
        ///....
        //get the item in the navigations table
        MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First(); //here, a foreach would be necessary in a true-world scenario
        NavigationResult entity = new NavigationResult
        {
            Direction = dataRow.Direction,
            ///...
            NavigationResultID = dataRow.NavigationResultID
        }; //convert to entities
        context.AddToNavigationResult(entity); //add to entities
        ///....

    This is very tedious work, as I would need to create a converter for each of my entity types and iterate over each table in the DataSet I have. Beware, if I ever change my database model.... Also, I have found out that I can only instantiate MyEntities if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this hinders my intentions. I intend to have only some in-memory proxy database. Can I do this more simply? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object? P.S.: I have seen a few questions about unit testing that seem somewhat related, but not quite exact.
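
    One way to avoid writing a converter per entity is a small reflection-based mapper that copies same-named columns onto new entity instances and adds them to the context. This is only a sketch, not the poster's code: it assumes column names match property names, ignores navigation properties and relationships, and the entity-set name must match the model.

        // Hypothetical generic loader: DataTable rows -> attached entities.
        public static void LoadTable<TEntity>(System.Data.DataTable table,
                                              System.Data.Objects.ObjectContext context,
                                              string entitySetName)
            where TEntity : System.Data.Objects.DataClasses.EntityObject, new()
        {
            foreach (System.Data.DataRow row in table.Rows)
            {
                var entity = new TEntity();
                foreach (System.Data.DataColumn column in table.Columns)
                {
                    var property = typeof(TEntity).GetProperty(column.ColumnName);
                    if (property != null && property.CanWrite && row[column] != System.DBNull.Value)
                    {
                        property.SetValue(entity, row[column], null); // scalar columns only
                    }
                }
                context.AddObject(entitySetName, entity); // e.g. "NavigationResult"
            }
        }

    Note that the context still needs a connection string just to be constructed; pointing it at a throwaway SQL Express database, or hiding the whole thing behind the existing interface and faking it there, is the usual workaround, since ObjectContext has no true in-memory mode in EF4.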

    Read the article

  • IE7 not digesting JSON: "parse error" [resolved]

    - by Kenny Leu
    While trying to GET a JSON, my callback function is NOT firing.

        $.ajax({
            type: "GET",
            dataType: 'json',
            url: myLocalURL,
            data: myData,
            success: function(returned_data){ alert('success'); }
        });

    The strangest part of this is that my JSON(s) validate on JSONLint; this ONLY fails on IE7. It works in Safari, Chrome, and all versions of Firefox (EDIT: and even in IE8). If I use 'error', then it reports "parseError", even though the JSON validates! Is there anything that I'm missing? Does IE7 not process certain characters or data structures (my data doesn't have anything non-alphanumeric, but it DOES have nested JSONs)? I have used tons of other AJAX calls that all work (even in IE7), with the exception of THIS call. An example data return (EDIT: this is a structurally complete example, meaning it is only missing a few second-tier fields, but follows this exact hierarchy) is:

        {"question":{
            "question_id":"19",
            "question_text":"testing",
            "other_crap":"none"
        },
        "timestamp":{
            "response":"answer",
            "response_text":"the text here"
        }
        }

    I am completely at a loss. Hopefully someone has some insight into what's going on... thank you!

    EDIT: Here's a copy of the SIMPLEST case of dummy data that I'm using... it still doesn't work in IE7.

        {
            "question":{
                "question_id":"20",
                "question_text":"testing :",
                "adverse_party":"none",
                "juris":"California",
                "recipients":"Carl Chan"
            }
        }

    EDIT 2: I am starting to doubt that it is a JSON issue... but I have NO idea what else it could be. Here is another resource I've found that could be the cause, but it doesn't seem to help either: http://firelitdesign.blogspot.com/2009/07/jquerys-getjson.html (Django uses Unicode by default, so I don't think this is causing it). Anybody have any other ideas?

    ANSWER: I finally managed to figure it out... mostly via tedious trial and error. I want to thank everyone for their suggestions... as soon as I have 15 rep, I'll upvote you, I promise. :) There was basically no way that you guys could have figured it out, because the issue turned out to be a strange bug between IE7 and Django (my research didn't bring up any similar issues). We were basically using the Django template language to generate our JSON... and in the midst of this particular JSON, we were using custom template tags:

        {% load customfilter %}
        {
        "question":{
            "question_id":"{{question.id}}",
            "question_text":"{{question.question_text|customfilterhere}}"
        }
        }

    As soon as I deleted anything related to the customfilter, IE7 was able to parse the JSON perfectly! We still don't have a workaround yet, but at least we now know what's causing it. Has anyone seen any similar issues? Once again, thank you everyone for your contributions.
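
    Since the parse failure disappears when the custom filter's output is removed, one plausible culprit is an unescaped character (a quote, newline or control character) in that output that IE7's stricter JSON parsing rejects while other browsers tolerate it. A hedged sketch of the usual guard in the Django template; customfilterhere is the poster's filter, escapejs is Django's built-in escaping filter:

        {% load customfilter %}
        {
        "question":{
            "question_id":"{{ question.id }}",
            "question_text":"{{ question.question_text|customfilterhere|escapejs }}"
        }
        }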

    Read the article

  • Android Packaging Problem: resources.ap_ does not exist

    - by Galip
    I am trying to fix a problem in Eclipse for like 3 hours and I haven't made any progress. Tomorrow the customer is coming to look at my app, and I have no time left. This is really frustrating! This morning when I was coding and I wanted to run my app on my device, Eclipse crashed all of a sudden: 'aapt.exe has stopped working'. After this Eclipse wasn't starting anymore; it froze at the splash image. I looked on the internet and tried different solutions like going back to Java SE 6 update 20, changing the .ini file, etc. In the end reinstalling Eclipse did the job. Shortly after that the 'aapt.exe has stopped working' returned. I found a workaround by changing my project's target: 1.5, 1.6, 2.2, it doesn't matter, as long as it's different than the one before. Now I get this error:

        Error generating final archive: java.io.FileNotFoundException: C:\xxx\bin\resources.ap_ does not exist

    I tried a clean but that doesn't work. Deleting and automatically regenerating R.java also didn't work. I ran the same code in NetBeans with the Android plugin and there it gives me the 'aapt.exe has stopped working' again :( Please guys, how can I fix this?

    Edit: I think I may have found the reason. These are the error lines in the console:

        org.xmlpull.v1.XmlPullParserException: Binary XML file line #3: <bitmap> requires a valid src attribute
            at android.graphics.drawable.BitmapDrawable.inflate(BitmapDrawable.java:341)
            at android.graphics.drawable.Drawable.createFromXmlInner(Drawable.java:779)
            at android.graphics.drawable.Drawable.createFromXml(Drawable.java:720)
            at com.android.layoutlib.bridge.ResourceHelper.getDrawable(ResourceHelper.java:150)
            at com.android.layoutlib.bridge.BridgeTypedArray.getDrawable(BridgeTypedArray.java:668)
            at android.view.View.<init>(View.java:1846)
            at android.view.View.<init>(View.java:1795)
            at android.view.ViewGroup.<init>(ViewGroup.java:282)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
            at org.eclipse.equinox.launcher.Main.main(Main.java:1383)
        [2011-01-17 16:37:20 - gegevens.xml] Unable to resolve drawable "com.android.layoutlib.utils.ResourceValue@267e33de" in attribute "background"

    The file it's talking about is 'bg.png'. It's a small PNG file which I repeat in a .xml file:

        <?xml version="1.0" encoding="utf-8"?>
        <bitmap xmlns:android="http://schemas.android.com/apk/res/android"
            android:src="@drawable/bg"
            android:tileMode="repeat" />

    This file has worked from the first time without any problems. I deleted it from the drawable folder, waited for an error message, and then added it back. The red x next to the folder name went away, but still nothing different...

    Read the article

  • Compare NSArray with NSMutableArray adding delta objects to NSMutableArray

    - by Hooligancat
    I have an NSMutableArray that is populated with objects of strings. For simplicity sake we'll say that the objects are a person and each person object contains information about that person. Thus I would have an NSMutableArray that is populated with person objects: person.firstName person.lastName person.age person.height And so on. The initial source of data comes from a web server and is populated when my application loads and completes it's initialization with the server. Periodically my application polls the server for the latest list of names. Currently I am creating an NSArray of the result set, emptying the NSMutableArray and then re-populating the NSMutableArray with NSArray results before destroying the NSArray object. This seems inefficient to me on a few levels and also presents me with a problem losing table row references which I can work around, but might be creating more work for myself in doing so. The inefficiency seems to be that I should be able to compare the two arrays and end up with a filtered NSArray. I could then add the filtered set to the NSMutableArray. This would mean that I can simply append new data to the NSMutableArray instead of throwing everything out and re-populating. Conversely I would need to do the same filter in reverse to see if there are records that need removing from the NSMutableArray. Is there any method to do this in a more efficient manner? Have I overlooked something in the docs some place that refers to a simpler technique? I have a problem when I empty the NSMutableArray and re-populate in that any referencing tables lose their selected row state. I can track it and re-select it, but my theory is that using some form of compare and adding objects and removing objects instead of dealing with the whole array in one block might mean I keep my row reference (assuming the item isn't deleted of course). Any suggestions or help much appreciated. Update Would it be just as fast to do a fast enumeration over each comparing each line item as I go? It seems like an expensive operation, but with the last fast enumeration code it might be pretty efficient...
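
    For the delta itself, NSPredicate can filter one array against the other without a hand-written loop, provided each person object carries some stable identifier. A hedged sketch; personID, currentPeople and serverPeople are assumed names, not taken from the original model:

        // New objects from the server that are not yet in the mutable array
        NSArray *currentIDs = [currentPeople valueForKey:@"personID"];
        NSPredicate *isNew = [NSPredicate predicateWithFormat:@"NOT (personID IN %@)", currentIDs];
        NSArray *toAdd = [serverPeople filteredArrayUsingPredicate:isNew];
        [currentPeople addObjectsFromArray:toAdd];

        // Objects that disappeared on the server and should be removed locally
        NSArray *serverIDs = [serverPeople valueForKey:@"personID"];
        NSPredicate *isGone = [NSPredicate predicateWithFormat:@"NOT (personID IN %@)", serverIDs];
        NSArray *toRemove = [currentPeople filteredArrayUsingPredicate:isGone];
        [currentPeople removeObjectsInArray:toRemove];

    Because the existing objects are left in place rather than thrown away and recreated, a table view's selected row can keep pointing at the same object unless that object was itself removed.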

    Read the article

  • Using MVC2 to update an Entity Framework v4 object with foreign keys fails

    - by jbjon
    With the following simple relational database structure: An Order has one or more OrderItems, and each OrderItem has one OrderItemStatus. Entity Framework v4 is used to communicate with the database and entities have been generated from this schema. The Entities connection happens to be called EnumTestEntities in the example. The trimmed down version of the Order Repository class looks like this: public class OrderRepository { private EnumTestEntities entities = new EnumTestEntities(); // Query Methods public Order Get(int id) { return entities.Orders.SingleOrDefault(d => d.OrderID == id); } // Persistence public void Save() { entities.SaveChanges(); } } An MVC2 app uses Entity Framework models to drive the views. I'm using the EditorFor feature of MVC2 to drive the Edit view. When it comes to POSTing back any changes to the model, the following code is called: [HttpPost] public ActionResult Edit(int id, FormCollection formValues) { // Get the current Order out of the database by ID Order order = orderRepository.Get(id); var orderItems = order.OrderItems; try { // Update the Order from the values posted from the View UpdateModel(order, ""); // Without the ValueProvider suffix it does not attempt to update the order items UpdateModel(order.OrderItems, "OrderItems.OrderItems"); // All the Save() does is call SaveChanges() on the database context orderRepository.Save(); return RedirectToAction("Details", new { id = order.OrderID }); } catch (Exception e) { return View(order); // Inserted while debugging } } The second call to UpdateModel has a ValueProvider suffix which matches the auto-generated HTML input name prefixes that MVC2 has generated for the foreign key collection of OrderItems within the View. The call to SaveChanges() on the database context after updating the OrderItems collection of an Order using UpdateModel generates the following exception: "The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted." When debugging through this code, I can still see that the EntityKeys are not null and seem to be the same value as they should be. This still happens when you are not changing any of the extracted Order details from the database. Also the entity connection to the database doesn't change between the act of Getting and the SaveChanges so it doesn't appear to be a Context issue either. Any ideas what might be causing this problem? I know EF4 has done work on foreign key properties but can anyone shed any light on how to use EF4 and MVC2 to make things easy to update; rather than having to populate each property manually. I had hoped the simplicity of EditorFor and DisplayFor would also extend to Controllers updating data. Thanks
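
    A hedged alternative to calling UpdateModel on the collection: bind the posted rows to plain view-model objects and copy the editable fields onto the already-attached OrderItems by key, so EF4 never sees a detached item whose OrderID or status foreign key has been nulled out by model binding. All names below are illustrative, not taken from the actual model:

        foreach (var posted in postedItems)   // postedItems: a List<OrderItemInput> bound from the form
        {
            var item = order.OrderItems.Single(i => i.OrderItemID == posted.OrderItemID);
            item.Quantity = posted.Quantity;
            item.OrderItemStatusID = posted.OrderItemStatusID;  // FK stays non-null
        }
        orderRepository.Save();

    Removing a child then becomes an explicit DeleteObject call on the context rather than a side effect of binding, which is essentially what the "relationship could not be changed" error is complaining about.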

    Read the article

  • Android "Trying to use recycled bitmap" error?

    - by Mike
    Hi all, I am running into a problem with bitmaps on an Android application I am working on. What is supposed to happen is that the application downloads images from a website, saves them to the device, loads them into memory as bitmaps into an arraylist, and displays them to the user. This all works fine when the application is first started. However, I have added a refresh option for the user where the images are deleted, and the process outlined above starts all over. My problem: by using the refresh option the old images were still in memory and I would quickly get OutOfMemoryErrors. Thus, if the images are being refreshed, I had it run through the arraylist and recycle the old images. However, when the application goes to load the new images into the arraylist, it crashes with a "Trying to use recycled bitmap" error. As far as I understand it, recycling a bitmap destroys the bitmap and frees up its memory for other objects. If I want to use the bitmap again, it has to be reinitialized. I believe that I am doing this when the new files are loaded into the arraylist, but something is still wrong. Any help is greatly appreciated as this is very frustrating. The problem code is below. Thank you!

        public void fillUI(final int refresh) {
            // Recycle the images to avoid memory leaks
            if(refresh==1) {
                for(int x=0; x<images.size(); x++)
                    images.get(x).recycle();
                images.clear();
                selImage=-1; // Reset the selected image variable
            }

            final ProgressDialog progressDialog = ProgressDialog.show(this, null, this.getString(R.string.loadingImages));

            // Create the array with the image bitmaps in it
            new Thread(new Runnable() {
                public void run() {
                    Looper.prepare();
                    File[] fileList = new File("/data/data/[package name]/files/").listFiles();
                    if(fileList!=null) {
                        for(int x=0; x<fileList.length; x++) {
                            try {
                                images.add(BitmapFactory.decodeFile("/data/data/[package name]/files/" + fileList[x].getName()));
                            } catch (OutOfMemoryError ome) {
                                Log.i(LOG_FILE, "out of memory again :(");
                            }
                        }
                        Collections.reverse(images);
                    }
                    fillUiHandler.sendEmptyMessage(0);
                }
            }).start();

            fillUiHandler = new Handler() {
                public void handleMessage(Message msg) {
                    progressDialog.dismiss();
                }
            };
        }
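
    One hedged explanation: the views or adapter that drew the old list may still hold references to the just-recycled bitmaps, so anything that redraws between images.clear() and the background reload finishing will try to use a recycled bitmap. A sketch of a safer swap, with made-up helper names (loadBitmapsFromDisk, adapter) that are not from the original code:

        // Load the new bitmaps into a separate list first, off the UI thread.
        List<Bitmap> fresh = loadBitmapsFromDisk();   // hypothetical loader

        // Swap on the UI thread, tell the views to rebind, and only then
        // recycle the bitmaps nothing references any more.
        List<Bitmap> old = new ArrayList<Bitmap>(images);
        images.clear();
        images.addAll(fresh);
        adapter.notifyDataSetChanged();               // hypothetical adapter
        for (Bitmap b : old) {
            if (b != null && !b.isRecycled()) {
                b.recycle();
            }
        }
        old.clear();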

    Read the article

  • Resgen al.exe generated resources do not work within .net library

    - by Raj G
    Hi, I am currently working on a library in .Net and I planned to make the strings that are used within the library into culture specific resource files. I made Resources.resx, Resources.en-US.resx and Resources.ja-JP.resx file. I also deleted the Resources.designer.cs file autogenerated by visual studio 2008. I am loading Resources through my custom ResourceManager object [using GetString method]. The problem that I am facing is that when I compile the library within visual studio and set the culture from the calling application, everything is working fine. But if I manually go to the directory and change a string for a culture and regenerate the satellite assembly with resgen and al.exe, the string displayed, falls back to the invariant culture. I have attached the ildasm view of both the dlls en-US generated from within visual studio //Metadata version: v2.0.50727 .assembly extern mscorlib { .publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4.. .hash = (71 05 4D 54 C4 8D C2 90 7D 8B CF 57 2E B5 98 22 // q.MT....}..W..." F5 5B 2E 06 ) // .[.. .ver 2:0:0:0 } .assembly EmailEngine.resources { .custom instance void [mscorlib]System.Reflection.AssemblyTitleAttribute::.ctor(string) = ( 01 00 0B 45 6D 61 69 6C 45 6E 67 69 6E 65 00 00 ) // ...EmailEngine.. .custom instance void [mscorlib]System.Reflection.AssemblyDescriptionAttribute::.ctor(string) = ( 01 00 FF 00 00 ) .custom instance void [mscorlib]System.Reflection.AssemblyCompanyAttribute::.ctor(string) = ( 01 00 FF 00 00 ) .custom instance void [mscorlib]System.Reflection.AssemblyProductAttribute::.ctor(string) = ( 01 00 0B 45 6D 61 69 6C 45 6E 67 69 6E 65 00 00 ) // ...EmailEngine.. .custom instance void [mscorlib]System.Reflection.AssemblyCopyrightAttribute::.ctor(string) = ( 01 00 12 43 6F 70 79 72 69 67 68 74 20 C2 A9 20 // ...Copyright .. 20 32 30 30 38 00 00 ) // 2008.. .custom instance void [mscorlib]System.Reflection.AssemblyTrademarkAttribute::.ctor(string) = ( 01 00 FF 00 00 ) .custom instance void [mscorlib]System.Reflection.AssemblyFileVersionAttribute::.ctor(string) = ( 01 00 07 31 2E 30 2E 30 2E 30 00 00 ) // ...1.0.0.0.. .hash algorithm 0x00008004 .ver 1:0:0:0 .locale = (65 00 6E 00 2D 00 55 00 53 00 00 00 ) // e.n.-.U.S... } .mresource public 'EmailEngine.Properties.Resources.en-US.resources' { // Offset: 0x00000000 Length: 0x00000111 } .module EmailEngine.resources.dll // MVID: {D030D620-4E59-46F4-94F4-5EA0F9554E67} .imagebase 0x00400000 .file alignment 0x00000200 .stackreserve 0x00100000 .subsystem 0x0003 // WINDOWS_CUI .corflags 0x00000001 // ILONLY // Image base: 0x008B0000 ja-JP generated by me using resgen and al.exe // Metadata version: v2.0.50727 .assembly EmailEngine.resources { .hash algorithm 0x00008004 .ver 0:0:0:0 .locale = (6A 00 61 00 00 00 ) // j.a... } .mresource public 'EmailEngine.Properties.Resources.ja-JP.resources' { // Offset: 0x00000000 Length: 0x0000012F } .module EmailEngine.resources.dll // MVID: {0F470BCD-C36D-4B9F-A8ED-205A0E5A9F6F} .imagebase 0x00400000 .file alignment 0x00000200 .stackreserve 0x00100000 .subsystem 0x0003 // WINDOWS_CUI .corflags 0x00000001 // ILONLY // Image base: 0x007F0000 Can anyone help me as to why these two files are different and what is going on here? Why would the same Japanese resource file work when generated from within visual studio and not when generated using tools. TIA Raj
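
    The two disassemblies differ in exactly the places the resource lookup cares about: the hand-built satellite has .ver 0:0:0:0 and locale "ja" rather than "ja-JP", so the runtime's probe for a ja-JP satellite matching the main assembly's version is likely to miss it and fall back to the invariant strings. A hedged sketch of rebuilding it so the metadata matches; version, names and paths are placeholders:

        resgen Resources.ja-JP.resx EmailEngine.Properties.Resources.ja-JP.resources

        al.exe /target:lib ^
               /embed:EmailEngine.Properties.Resources.ja-JP.resources ^
               /culture:ja-JP ^
               /template:EmailEngine.dll ^
               /out:ja-JP\EmailEngine.resources.dll

        rem /template copies the identity (including version) from the main assembly;
        rem the output must sit in a ja-JP subfolder next to EmailEngine.dll.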

    Read the article

  • Strange crashes when using MPMoviePlayerController on the iPad and iPad simulator with iOS 4.2.

    - by Dave L
    I'm getting crashes when trying to play a movie using MPMoviePlayerController on the iPad and in the iPad simulator under iOS 4.2. I am building with Xcode 3.2.5 and the 4.2 SDK. When running on an iPad with iOS 3.2 or in the 3.2 iPad simulator the same code works fine, but on an iPad running 4.2 or in the 4.2 simulator it crashes. The crash is due to an unrecognized-selector exception thrown somewhere deep in the Cocoa libraries: something tries to call a selector called "setHitRectInsets:" on a UIButton object. Also, the crash happens after control has returned to the main event loop. Does anybody have any idea what might be causing this?

    Here is the code that is crashing (roughly; some extraneous stuff that doesn't seem to have any effect has been deleted):

        NSString *fullMovieFilename = [[NSBundle mainBundle] pathForResource:@"movie" ofType:@"m4v"];
        NSURL *movieURL = [NSURL fileURLWithPath:fullMovieFilename];
        moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
        moviePlayer.controlStyle = MPMovieControlStyleNone; // crash happens regardless of the setting here
        [moviePlayer.view setFrame:self.view.bounds];
        [self.view addSubview:moviePlayer.view];
        [moviePlayer setFullscreen:YES animated:YES]; // crash happens whether I use fullscreen or not
        [moviePlayer play];

    The same code also works just fine if I build with Xcode 3.2.3 and the 4.0 SDK. The crash stack trace looks like this:

        2011-03-04 23:08:22.017 MyApp[31610:207] -[UIButton setHitRectInsets:]: unrecognized selector sent to instance 0x990bc60
        2011-03-04 23:08:22.020 MyApp[31610:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[UIButton setHitRectInsets:]: unrecognized selector sent to instance 0x990bc60'
        *** Call stack at first throw:
        (
        0   CoreFoundation     0x0151abe9 __exceptionPreprocess + 185
        1   libobjc.A.dylib    0x0166f5c2 objc_exception_throw + 47
        2   CoreFoundation     0x0151c6fb -[NSObject(NSObject) doesNotRecognizeSelector:] + 187
        3   CoreFoundation     0x0148c366 ___forwarding___ + 966
        4   CoreFoundation     0x0148bf22 _CF_forwarding_prep_0 + 50
        5   MediaPlayer        0x011b5bdb -[MPTransportControls createButtonForPart:] + 380
        6   MediaPlayer        0x011b65d5 -[MPTransportControls _updateAdditions:removals:forPart:] + 110
        7   MediaPlayer        0x011b4d78 -[MPTransportControls _reloadViewWithAnimation:] + 299
        8   MediaPlayer        0x011b621a -[MPTransportControls setVisibleParts:] + 49
        9   MediaPlayer        0x011b6e4a -[MPTransportControls initWithFrame:] + 173
        10  MediaPlayer        0x011f9188 -[MPInlineTransportControls initWithFrame:] + 78
        11  MediaPlayer        0x011f193a -[MPInlineVideoOverlay layoutSubviews] + 158
        12  QuartzCore         0x00d6e451 -[CALayer layoutSublayers] + 181
        13  QuartzCore         0x00d6e17c CALayerLayoutIfNeeded + 220
        14  QuartzCore         0x00d6e088 -[CALayer layoutIfNeeded] + 111
        15  MediaPlayer        0x011f130c -[MPInlineVideoOverlay setAllowsWirelessPlayback:] + 46
        16  MediaPlayer        0x011f5b0d -[MPInlineVideoViewController _overlayView] + 526
        17  MediaPlayer        0x011f6359 -[MPInlineVideoViewController _layoutForItemTypeAvailable] + 1350
        18  Foundation         0x000e56c1 _nsnote_callback + 145
        19  CoreFoundation     0x014f2f99 __CFXNotificationPost_old + 745
        20  CoreFoundation     0x0147233a _CFXNotificationPostNotification + 186
        21  Foundation         0x000db266 -[NSNotificationCenter postNotificationName:object:userInfo:] + 134
        22  Foundation         0x000e75a9 -[NSNotificationCenter postNotificationName:object:] + 56
        23  MediaPlayer        0x01184dd6 -[MPAVItem _updateForNaturalSizeChange] + 278
        24  MediaPlayer        0x011818e4 __-[MPAVItem blockForDirectAVControllerNotificationReferencingItem:]_block_invoke_1 + 82
        25  MediaPlayer        0x011899ba -[MPAVController _postMPAVControllerSizeDidChangeNotificationWithItem:] + 138
        26  Foundation         0x000e56c1 _nsnote_callback + 145
        27  CoreFoundation     0x014f2f99 __CFXNotificationPost_old + 745
        28  CoreFoundation     0x0147233a _CFXNotificationPostNotification + 186
        29  Foundation         0x000db266 -[NSNotificationCenter postNotificationName:object:userInfo:] + 134
        30  Celestial          0x052a35b2 -[NSObject(NSObject_AVShared) postNotificationWithDescription:] + 176
        31  Celestial          0x052ab214 -[AVController fpItemNotification:sender:] + 735
        32  Celestial          0x052b5305 -[AVPlaybackItem fpItemNotificationInfo:] + 640
        33  Celestial          0x052a3d5c -[AVObjectRegistry safeInvokeWithDescription:] + 211
        34  Foundation         0x000fa9a6 __NSThreadPerformPerform + 251
        35  CoreFoundation     0x014fc01f __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 15
        36  CoreFoundation     0x0145a28b __CFRunLoopDoSources0 + 571
        37  CoreFoundation     0x01459786 __CFRunLoopRun + 470
        38  CoreFoundation     0x01459240 CFRunLoopRunSpecific + 208
        39  CoreFoundation     0x01459161 CFRunLoopRunInMode + 97
        40  GraphicsServices   0x01e4f268 GSEventRunModal + 217
        41  GraphicsServices   0x01e4f32d GSEventRun + 115
        42  UIKit              0x0038a42e UIApplicationMain + 1160
        43  MyApp              0x0000837a main + 102
        44  MyApp              0x00002009 start + 53
        )
        terminate called after throwing an instance of 'NSException'

    I've been fighting with this for several days and have searched the internet quite a bit for anybody having similar problems, all with no luck. Any suggestions would be greatly appreciated.

    Read the article

  • Validate a string in a table in SQL Server - CLR function or T-SQL

    - by Ashish Gupta
    I need to check whether a column value (a string) in a SQL Server table starts with a lowercase letter and contains only letters, digits, '_' and '-'. I know I can use a SQL Server CLR function for that. However, I am trying to implement the validation as a scalar UDF and have made very little progress. I can use NOT LIKE, but I am not sure how to write a pattern that validates the string irrespective of the order of the characters. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance.

    Thank you everyone for your comments. This morning I went the CLR-function route. For what I was trying to achieve, I created one CLR function that validates an input string and called it from a SQL UDF, and it works well.

    To compare the performance of a T-SQL UDF that calls a SQL CLR function against a plain T-SQL UDF, I created a SQL CLR function that simply checks whether the input string contains only lowercase letters (returning true or false) and called it from a UDF (IsLowerCaseCLR). I also created a regular T-SQL UDF (IsLowerCaseTSQL) that does the same thing using NOT LIKE. Then I created a table (Person) with columns Name (varchar) and IsValid (bit) and populated it with test data: 1,000 records with 'Ashish' as the Name value and 1,000 records with 'ashish' as the Name value. Then I ran the following:

        UPDATE Person SET IsValid = 1 WHERE dbo.IsLowerCaseTSQL(Name) = 1

    This updated 1,000 records (with IsValid = 1) and took less than a second. I deleted all the data in the table, repopulated it with the same data, and ran the same update using the SQL CLR UDF; this took 3 seconds. If the update touches 5,000 records, the regular UDF takes 0 seconds while the CLR UDF takes 16 seconds!

    I know very little about T-SQL pattern matching, or I would have tested my actual, more complex validation criteria. But I want to know: even if I had written that, would it have been faster than the SQL CLR function, given the example above? Are we using SQL CLR because it lets us implement much richer logic that would be difficult to write in regular SQL?

    Sorry for the long post. I just want to know from the experts. Please feel free to ask if anything here is unclear. Thank you again for your time.
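
    As an aside on the validation rule itself: "starts with a lowercase letter, then only letters, digits, '_' or '-'" is a single regular expression. The sketch below expresses that rule outside SQL, in standalone C++, purely to illustrate the pattern being discussed; the function name and the assumption that the remainder may contain letters of either case are illustrative choices, not part of the original question:

        #include <iostream>
        #include <regex>
        #include <string>

        // Hypothetical helper mirroring the rule described above: first character
        // a lowercase letter, remainder limited to letters, digits, '_' and '-'.
        bool isValidName(const std::string& value) {
            static const std::regex pattern("^[a-z][A-Za-z0-9_-]*$");
            return std::regex_match(value, pattern);
        }

        int main() {
            std::cout << std::boolalpha
                      << isValidName("ashish_01") << '\n'   // true
                      << isValidName("Ashish")    << '\n'   // false: starts with an uppercase letter
                      << isValidName("a shish")   << '\n';  // false: contains a space
            return 0;
        }

    In T-SQL itself the same rule would typically be composed from LIKE character ranges such as [a-z] and negated sets, whose case sensitivity depends on the column collation - which is part of what makes the pattern awkward to write directly.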

    Read the article

  • Have Button re-appear immediately after clicking button in ListView row

    - by Soeren
    I have 4 buttons on a page. Each button opens a modal window and lets the user input data in a form. When the user hits the save button in the modal, a ListView appears on the page with the submitted data. The button the user clicked to open the modal window is set to Visible="false", so it is gone once the row has been added to the ListView. That leaves 3 buttons, and the same applies to those: when the user hits a button, a modal appears, and when the modal form is submitted, the button disappears and a row is added to the ListView.

    Each ListView row contains a delete button. When it is clicked, the row is deleted and the button that was originally clicked to add that row (and open the modal) SHOULD reappear, but it doesn't. The row disappears, but I have to refresh the page before the button comes back. There is a ScriptManager on the master page, so I guess this is an AJAX partial-refresh issue. I tried hooking into different events, but I can't find one that fires at the right time. I use an ObjectDataSource to fill the ListView, and the data comes from a database, wrapped in a business object.

    This code loads the business objects into a List<> and checks whether the user has inserted an item of a specific type. If he has, the button he used to open the modal is hidden. This works fine (maybe not the most elegant):

        _goals = GoalManager.GetGoalsByUser(UserID);
        if (_goals != null)
        {
            foreach (Goal _goalinlist in _goals)
            {
                if (_goalinlist.GoalType == 1) { Button1.Visible = false; goalid1 = true; }
                if (_goalinlist.GoalType == 2) { Button2.Visible = false; goalid2 = true; }
                if (_goalinlist.GoalType == 3) { Button3.Visible = false; goalid3 = true; }
                if (_goalinlist.GoalType == 4) { Button4.Visible = false; goalid4 = true; }
            }
        }

    As you can see, I tried setting a boolean and then checking it when the page reloads. But the problem (I guess) is that the whole page isn't refreshed when the delete button in the ListView is clicked. This is the delete button in the ListView:

        <asp:ImageButton ID="ImageButton2" runat="server" CommandName="Delete" CausesValidation="false"
            ToolTip="Delete" CommandArgument='<%#Eval("GoalID")%>' ImageUrl="delete.gif"
            OnClientClick="return confirm('Delete this post?');" CssClass="button"/>

    So the question is: how do I make the button reappear right after the delete button in the ListView is clicked?

    Read the article

  • Deletes not cascading for self-referencing entities

    - by jwaddell
    I have the following (simplified) Hibernate entities:

        @Entity
        @Table(name = "package")
        public class Package {
            protected Content content;

            @OneToOne(cascade = {javax.persistence.CascadeType.ALL})
            @JoinColumn(name = "content_id")
            @Fetch(value = FetchMode.JOIN)
            public Content getContent() { return content; }

            public void setContent(Content content) { this.content = content; }
        }

        @Entity
        @Table(name = "content")
        public class Content {
            private Set<Content> subContents = new HashSet<Content>();
            private Package parentPackage;

            @OneToMany(fetch = FetchType.EAGER)
            @JoinTable(name = "subcontents",
                       joinColumns = {@JoinColumn(name = "content_id")},
                       inverseJoinColumns = {@JoinColumn(name = "elt")})
            @Cascade(value = {org.hibernate.annotations.CascadeType.DELETE,
                              org.hibernate.annotations.CascadeType.REPLICATE})
            @Fetch(value = FetchMode.SUBSELECT)
            public Set<Content> getSubContents() { return subContents; }

            public void setSubContents(Set<Content> subContents) { this.subContents = subContents; }

            @ManyToOne(cascade = {CascadeType.ALL})
            @JoinColumn(name = "parent_package_id")
            public Package getParentPackage() { return parentPackage; }

            public void setParentPackage(Package parentPackage) { this.parentPackage = parentPackage; }
        }

    So there is one Package, which has one "top" Content. The top Content links back to the Package, with cascade set to ALL. The top Content may have many "sub" Contents, and each sub-Content may have many sub-Contents of its own. Each sub-Content has a parent Package, which may or may not be the same Package as the top Content's (i.e. a many-to-one relationship from Content to Package). The relationships are required to be ManyToOne (Package to Content) and ManyToMany (Content to sub-Contents), but in the case I am currently testing each sub-Content relates to only one Package or Content.

    The problem is that when I delete a Package and flush the session, I get a Hibernate error stating that I'm violating a foreign key constraint on the subcontents table, with a particular content_id still referenced from subcontents. I've tried explicitly (recursively) deleting the Contents before deleting the Package, but I get the same error. Is there a reason why this entity tree is not being deleted properly?

    EDIT: After reading the answers and comments I realised that a Content cannot have multiple Packages, and a sub-Content cannot have multiple parent Contents, so I have changed the annotations from ManyToOne and ManyToMany to OneToOne and OneToMany. Unfortunately that did not fix the problem. I have also added the bidirectional link from Content back to the parent Package, which I left out of the simplified code.

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this: I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques.

    I'm developing a tool in PHP that could attain quite a lot of users if it works out right. However, while I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (and feel free to turn this question into a resource thread as well).

    Databases: At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database, although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to a third. My reasoning behind this is that sending queries to different databases will ease the load on them, since one database becomes three load sources. Also, would this still be effective if they were all on the same server?

    Caching: I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names or the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: say I have comments on different articles. Which is the better solution: store the simple comment template and render the comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page that is re-cached each time a comment is added, edited or deleted?

    Finally, does anyone have any tips or pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for?

    Thanks, Ross

    Read the article

  • Doubt about adopting CI (Hudson) into an existing automated Build Process (phing, svn)

    - by maraspin
    OUR CURRENT BUILD PROCESS: We're a small team of developers (2 to 4 people, depending on the project) who currently use Phing to deploy code to a staging environment before going live. We keep our code in an SVN repo, where trunk holds current active development; at certain times we make branches that we test and then (if successful) tag and export to the staging environment. If everything goes well there too, we finally deploy them to the production servers. The actions are highly automated, but always triggered by human intervention.

    THE DOUBT: We'd now like to introduce Continuous Integration (with Hudson) into the process; unfortunately we have a few doubts about syncing the activities, since we're afraid that CI could interfere with our build process and cause problems. Considering that an automated CI cycle has a certain frequency of automatically executed actions, we see only 2 possible cases for integration, each with its own problems:

    Case A: each CI cycle produces a new branch with its own name; we use that name to manually (through Phing, as happens now) export the code from SVN to the staging environment. The problem I see here is that (unless specific countermeasures are taken) the number of branches can grow out of control (suppose we commit often, so that we have a fresh new build/branch every N minutes).

    Case B: each CI cycle creates a new branch named, for instance, 'current', which is tagged with a unique name only when we manually decide to export it to staging; the 'current' branch is in any case deleted as soon as the next CI cycle starts up. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging, thus creating an inconsistent build (but maybe I'm just too pessimistic here, since I confess I don't know whether SVN offers some built-in protection against this).

    With all this said, I was wondering if anyone with similar experience could be so kind as to give us some hints on the subject, since none of the approaches depicted above looks completely satisfying to us. Is there something important we have completely left out of the overall picture?

    Thanks for your attention and, in advance, for your help!

    Read the article

  • C++: How to deep copy a struct with an unknown datatype?

    - by Ewald Peters
    Hi, I have a "data provider" which stores its output in a struct of a certain type, for instance:

        struct DATA_TYPE1 {
            std::string data_string;
        };

    This struct then has to be cast to a general datatype - I thought about void * or char * - because the "intermediate" object that copies it and stores it in its binary tree should be able to store many different types of such struct data:

        struct BINARY_TREE_ENTRY {
            void * DATA;
            struct BINARY_TREE_ENTRY * next;
        };

    The void * is later taken by another object that casts it back to (struct DATA_TYPE1 *) to get the original data. So the sender and the receiver know about the datatype DATA_TYPE1, but the copying object in between does not. How can the intermediate object deep-copy the contents of the different structs when it doesn't know the datatype, only void *, and has no method to copy the real contents? dynamic_cast doesn't work on void *. The "intermediate" object should do something like:

        void store_data(void * CASTED_DATA_STRUCT) {
            void * DATA_COPY = create_a_deepcopy_of(CASTED_DATA_STRUCT);
            push_into_bintree(DATA_COPY);
        }

    A simple solution would be for the sending object not to delete the sent data struct until the receiving object has got it, but the sending objects are created and deleted dynamically, possibly before the receiver gets the data from the intermediate object (the communication is asynchronous), which is why I want to copy it.

    Instead of converting to void * I also tried converting to a pointer to a superclass which the intermediate copying object knows about and which is inherited by all the different struct datatypes:

        struct DATA_BASE_OBJECT {
        public:
            DATA_BASE_OBJECT() {}
            DATA_BASE_OBJECT(DATA_BASE_OBJECT * old_ptr) {
                std::cout << "this should be automatically overridden!" << std::endl;
            }
            virtual ~DATA_BASE_OBJECT() {}
        };

        struct DATA_TYPE1 : public DATA_BASE_OBJECT {
        public:
            string str;
            DATA_TYPE1() {}
            ~DATA_TYPE1() {}
            DATA_TYPE1(DATA_TYPE1 * old_ptr) {
                str = old_ptr->str;
            }
        };

    The corresponding binary tree entry would then be:

        struct BINARY_TREE_ENTRY {
            struct DATA_BASE_OBJECT * DATA;
            struct BINARY_TREE_ENTRY * next;
        };

    To copy the unknown datatype, I tried the following in the class that now receives the unknown datatype as a struct DATA_BASE_OBJECT * (previously the void *):

        void * copy_data(DATA_BASE_OBJECT * data_that_i_get_in_the_sub_struct) {
            struct DATA_BASE_OBJECT * copy_sub = new DATA_BASE_OBJECT(data_that_i_get_in_the_sub_struct);
            push_into_bintree(copy_sub);
        }

    I then added a copy constructor to DATA_BASE_OBJECT, but if a DATA_TYPE1 is first cast to a DATA_BASE_OBJECT and then copied, the derived DATA_TYPE1 part is not copied along with it. I then thought about finding out the size of the actual object and just memcpy-ing it, but the bytes are not stored in one contiguous block, and how would I find out the real in-memory size of a struct DATA_TYPE1 that holds a std::string anyway?

    Which other C++ methods are available to deep-copy an unknown datatype (and maybe to get the type information some other way at runtime)?

    Thanks, Ewald
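
    A common C++ answer to this kind of question is the virtual clone (prototype) pattern: the base struct declares a virtual copy function, every concrete struct overrides it, and the intermediate object only ever calls clone() through a base pointer, so it never needs to know the concrete type. Below is a minimal sketch of that idea reusing the simplified names from the question (DATA_BASE_OBJECT, DATA_TYPE1, store_data); the IntermediateStore container and the use of a std::vector instead of the binary tree are illustrative assumptions, not the asker's actual code:

        #include <iostream>
        #include <memory>
        #include <string>
        #include <vector>

        // Base type for everything the intermediate object stores.
        struct DATA_BASE_OBJECT {
            virtual ~DATA_BASE_OBJECT() {}
            // Each concrete type knows how to deep-copy itself.
            virtual DATA_BASE_OBJECT * clone() const = 0;
        };

        // One concrete payload type; the sender and receiver know about it,
        // the intermediate container does not.
        struct DATA_TYPE1 : public DATA_BASE_OBJECT {
            std::string data_string;
            explicit DATA_TYPE1(const std::string & s) : data_string(s) {}
            DATA_TYPE1 * clone() const override { return new DATA_TYPE1(*this); }
        };

        // The "intermediate" object: it deep-copies without knowing the concrete
        // type, because the virtual call dispatches to DATA_TYPE1::clone().
        struct IntermediateStore {
            std::vector<std::unique_ptr<DATA_BASE_OBJECT>> entries;
            void store_data(const DATA_BASE_OBJECT & incoming) {
                entries.emplace_back(incoming.clone());  // deep copy, type unknown here
            }
        };

        int main() {
            IntermediateStore store;
            {
                DATA_TYPE1 original("payload from a short-lived sender");
                store.store_data(original);
            }  // 'original' is gone; the store still holds its own deep copy.

            // The receiver, which does know the concrete type, can downcast safely.
            if (DATA_TYPE1 * d = dynamic_cast<DATA_TYPE1 *>(store.entries.front().get())) {
                std::cout << d->data_string << std::endl;
            }
            return 0;
        }

    With this in place the container never needs void *, memcpy or a base-class copy constructor; the same clone() call works for every payload type added later, and ownership could just as well be expressed with raw pointers instead of std::unique_ptr if the project predates C++11.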

    Read the article
