Search Results

Search found 12437 results on 498 pages for 'normal mapping'.


  • Win7 taskbar freezes on startup for about 1-2 mins

    - by Mike
    Running Win7 64-bit for about 4 months now. Never had this problem, and I didn't install anything new recently. When I boot up I can't do anything in the taskbar; it's frozen for about 1-2 minutes, then everything is normal. I can right-click on my desktop and move my mouse around. This randomly started happening a couple of days ago after a reboot. I have a 3.2 GHz quad, SSD, 4 GB RAM, etc., and it usually starts up quickly. After some troubleshooting (including running antivirus and anti-malware scans), it doesn't appear to be software related, but rather services related. I can boot up in safe mode and safe mode with networking just fine. I can also boot up normally with all my regular software loading at startup, BUT with all my services turned off. Now the odd part. When I run msconfig to disable all the services at startup and then go through ticking them back on 5-10 at a time and rebooting, the result seems to be somewhat random. Ticking everything on from "Application Experience" halfway down to about "Quality Windows Audio Video Experience", I can boot without the 1-2 minute freeze. Then I start ticking the stuff below that, from a couple of Remote Access entries to Smart Card, Task Scheduler, etc. But the weird part is that sometimes it will freeze and sometimes it won't; I can't narrow it down. Then if it freezes, I'll boot up in safe mode, turn the ones I just turned on back off, and reboot normally, but it will freeze again, which makes no sense because that exact configuration just worked without freezing. I got frustrated enough that I backed up and wiped my hard drive (formatted and everything) and reinstalled Win7, but when I booted up, the freeze happened again. Any ideas? Thanks in advance.

    Read the article

  • Configuring iptables rules for HAProxy and others

    - by MLister
    I have the following relevant settings for HAProxy:

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 500
            contimeout 5s
            clitimeout 15s
            srvtimeout 15s

        frontend public
            bind *:80
            option http-server-close
            option http-pretend-keepalive
            option forwardfor
            # ACLs ...

    I have three backends (including an Nginx server) configured in HAProxy, all listening on different ports of 127.0.0.1. And my iptables config is this:

        *filter
        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i lo -d 127.0.0.0/8 -j REJECT
        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT
        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT
        # Allows SSH connections
        #
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        # log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT
        COMMIT

    My questions are: Would the above iptables config work with the settings/options in my HAProxy config? I am also running a postgres and a redis server on the same machine; what settings do I need to adjust for these two to make them work with iptables?
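
    A minimal sketch of the loopback-only case, assuming postgres (5432) and redis (6379) are bound to 127.0.0.1 and only HAProxy/Nginx on the same box talk to them; in that case the existing `-A INPUT -i lo -j ACCEPT` rule already covers the traffic and nothing extra is needed. Extra rules only come into play if they listen on another interface (the 10.0.0.5 source address below is purely illustrative):

        # Already covered above when everything stays on loopback:
        # -A INPUT -i lo -j ACCEPT
        # Only needed if postgres/redis also listen on a non-loopback address,
        # e.g. to let one specific host reach them:
        -A INPUT -p tcp -s 10.0.0.5 --dport 5432 -m state --state NEW -j ACCEPT
        -A INPUT -p tcp -s 10.0.0.5 --dport 6379 -m state --state NEW -j ACCEPT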

    Read the article

  • buagent process has been consuming 100% cpu for two days

    - by Maysam
    The buagent process has been using 100% of the CPU since two days ago. I want to terminate this process, but I don't know whether that would be dangerous or not (I am not very advanced in working with Linux; in fact, I am a complete beginner). The only thing I know is that this process is probably restoring some files, but I don't think it is normal for that to take more than two days. Now, do you think it would be OK if I kill this process? What command could I use to do that? I appreciate any help :) P.S. We are hosting a few web sites there. This server is also our name server and mail server. A couple of months ago we had a problem with the server which made us take a full backup of all files and then reinstall Linux. Yesterday, I selected one of the directories on the backup server and restored that directory to a tmp directory on our Linux server. After that, I couldn't restore any other directory, because every time I try, it says that there is another restore job running and I have to wait for it. When I use the "top" command I can see that the buagent process is consuming 100% of the CPU, so I guess that is the problem. I don't know why it has been taking so long to execute.
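
    If it does come to killing it, a rough sketch of the usual approach (this assumes buagent is the backup agent's only process and that the restore it is running can safely be abandoned):

        # list the matching process(es) first
        pgrep -l buagent
        # ask the agent to exit cleanly
        sudo pkill buagent
        # only if it is still running after a minute or so
        sudo pkill -9 buagent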

    Read the article

  • Command prompt cannot find PATH variable

    - by davidXYZ
    Sometimes my command prompt cannot find the PATH variable. I have an occasional problem at work where, when I open a command prompt and run commands like ipconfig or subst, I get an error saying something like 'ipconfig' is not recognized as an internal or external command. When I try echo %path%, it prints out %path% instead of the PATH value. If I look at my Environment Variables window, PATH is defined right there, but I don't know why CMD can't find it. At this point, I understand why the other commands were not being recognized, since their locations are listed in the PATH variable. However, I cannot understand why the PATH variable itself is not being found. If I restart the computer, everything is back to normal, and then in a few days I might have the same experience again. I tried using this answer. It suggested changing a registry value, but mine already had the value that was suggested, yet it wasn't working. (The restart step at the end would have solved it as usual, but that's not the point.) Any suggestions regarding why the PATH variable may become invisible every now and then, and how I can prevent it from happening again?
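
    As a sketch, a few checks that could be run from the broken session the next time it happens, to see whether the whole environment block is missing or only PATH (the registry paths below are the standard per-machine and per-user environment locations, not anything specific to the answer mentioned above):

        rem does the session have any environment variables at all?
        set
        rem what does the session think PATH is?
        set PATH
        rem what is stored in the registry for the machine and user PATH?
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path
        reg query HKCU\Environment /v Path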

    Read the article

  • please take a look at my server's ram usage

    - by user66779
    Hi, I am a noob with servers. I have a CentOS 5.5 VPS with 512 MB of RAM. My goal is to have it host just one Magento store. I've installed Magento on the server without any control panel, by installing LAMP myself along with whatever PHP extensions were necessary to get Magento to install. As soon as I visit my Magento store, the RAM on the VPS is almost completely used, with only about 100 MB left. Please see this screenshot of htop, taken just after I alone visited the website: http://img714.imageshack.us/img714/1944/screenouv.png As you can see, there's only around 100 MB left. Is that normal? I'm wondering if I might have done something stupid with the server that makes it very resource hungry. I installed Apache from the CentOS base repo, PHP 5.3 from the IUS repository, and MySQL 5.1 also from the IUS repo. I haven't changed any of the default config files for any of these, except to make memory_minimum 256 in php.ini. Is there anything I can do to free up more RAM? I'm clueless, but I see each Apache daemon is using 8% of available RAM, and AFAIK each visitor needs one Apache daemon, so I would run out of RAM with just a handful of visitors. Thanks for your advice.
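
    One knob that usually matters on a 512 MB box is the prefork MPM section in /etc/httpd/conf/httpd.conf; a rough sketch of more conservative values, assuming the stock CentOS Apache 2.2 with mod_php and roughly 30-40 MB per Apache process (the exact numbers are guesses to be tuned against what htop reports):

        <IfModule prefork.c>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       4
            ServerLimit           8
            MaxClients            8
            MaxRequestsPerChild   500
        </IfModule>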

    Read the article

  • Disk (EXT4) suddenly empty without any sign of why

    - by Ohnomydisk
    I have an Ubuntu 10.04 server with several disks in it. The disks are set up with a union filesystem, which presents them all as one logical /home. A few days ago, one of the disks appears to have suddenly 'become empty', for lack of a better explanation. The amount of data on the /home mount almost halved within minutes; the disk appears to have had just over 400 GB of data prior to 'becoming empty'. I have absolutely no idea what happened. I was not using the server at the time, but there are half a dozen other users who may have been (without root access and without the ability to hose a whole disk). I've run SMART tests on the disk and it comes back clean. The filesystem checks fine (it has 12 GB used now, as some user software continued downloading after the incident). All I know is that around midnight on October 19, the disk usage changed dramatically. The data points are every 15 minutes, and the full loss occurred between captures:
        2012-10-18 23:58:03.399647 - has 953.97/2059.07 GB [46.33 percent]
        2012-10-19 00:13:15.909010 - has 515.18/2059.07 GB [25.02 percent]
    Other than that, I don't have much to go on :-( I know that:
    - There's nothing interesting in the log files at that time
    - Nobody appeared to be logged in via SSH at the time it occurred (most users do not even use SSH)
    - The server was online through whatever occurred (3 months of uptime)
    - None of the other disks were affected, and everything else on the server looks completely normal
    - I have tried using "extundelete" on the disk and it didn't really find anything (some temporary files, but they looked new anyway)
    I am completely at a loss as to what could have caused this. I was initially thinking maybe a root escalation exploit, but even if someone did maliciously "rm" the disk contents, wouldn't it take more than 15 minutes for 400 GB?

    Read the article

  • Some Portions of Computer Running Slow (Specifically Graphics)

    - by Mike Gates
    I noticed today that a few things are running slowly on my Windows 7 laptop. Specifically:
    - Opening and closing windows takes several seconds for the animation to complete.
    - Windows Media Player opens fine, but movies are very laggy.
    - MMORPGs, such as RuneScape, are extremely laggy.
    - When waking my computer from sleep mode, after entering my password, my desktop takes about 3 seconds to fade in.
    Other than those, everything runs at a normal speed. Things I've done that may have contributed to this problem:
    - Changed the graphics processor (by plugging in/unplugging the charger) [however, no matter how I change the graphics, I'm still getting this lagginess]
    - Installed AdBlock, a Firefox addon [I recently removed it, and I'm still experiencing this problem]
    - Went into Advanced System Settings, clicked Settings, and unchecked a few visual things (such as the animation for opening and closing windows) [sure, this got rid of the opening/closing windows lag, but I like that little animation - plus that leaves all the other lag problems I'm experiencing]
    So, does anyone have any ideas/fixes? If so, please respond. Thank you. Some other information: I'm on an HP Pavilion dv7 laptop, 4285 Entertainment PC, with an Intel Core i5, ATI Mobility Radeon Premium Graphics, and Microsoft DirectX 11. Opening and closing of windows: defined as opening a program (i.e. Firefox) or closing it by hitting the X in the upper-right hand corner. Lately, the animation for opening and closing windows (which is simply either growing from the icon on the taskbar to fill the screen, or shrinking from the screen down towards the icon on the taskbar) has been taking several seconds. This problem also occurs for minimizing/maximizing windows. Very laggy movies: defined as .avi movie files saved to My Documents which skip several frames per second, seemingly slowing down the movie as a whole. Extremely laggy games: I tried RuneScape today, and movement in the game was at least 10x slower than it has ever been, even when playing on the lowest detail/graphics. Desktop taking 3 seconds to fade in after sleep: in this scenario, I had no other programs visibly running. The computer normally fades from the password screen to the desktop in about 1 second; however, it is now taking 3 or more seconds.

    Read the article

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server with ESXi 5. One VM is a Win2003 installation, and I have an external USB 2.0 drive attached via USB Controller and USB Device. I copy a 4 GB file from the external USB drive to the server disk. In the VM that takes up to 10 minutes. On a native Win2003 box that takes approx. 3 minutes. I have no explanation for that difference: in either case the bottleneck is the USB connection, which is much slower than the disks (SAS, RAID1). If the USB connection in the VM were USB 1.1 rather than USB 2.0, it would take much more time. (The disk performance between server partitions on the VM is fine - see update.) Could it be that my native box is extremely fast and the VM is the normal case? Update: I tried with passthrough, and a first run copied the same data in approx. 7 minutes - still 2 times slower than the native connection. I also did another measurement: the copy between partitions on the same VM takes 3 minutes.

    Read the article

  • How can I tell System Restore in the Windows 7 recovery console to use my recovered backup drive's restore point data?

    - by Rich Shealer
    My Windows 7 desktop PC failed to boot. It would get to a grayish screen with a mouse pointer and would only respond to the power button. After much examination I found that the problem was not a failed drive, as running CHKDSK from the Recovery Console on my main drives passed without any errors. I had been installing various Java versions in the days before the failure, so I decided to use a restore point to roll back. I have an external SATA drive controller with two 2 TB drives mirrored using the Windows mirroring function. My system has been backing up to this drive regularly. The problem is I accidentally broke the mirror when testing to see if this drive system might have been causing my boot issue. Connecting it to another machine showed two dynamic drives that were invalid. In the end I reformatted one as an NTFS basic disk and used recovery software on the other to copy all of the files to the reformatted drive. I had to copy the restore points into the new drive's System Volume Information folder by granting rights to that user. I moved the drive back to the original machine and rebooted. I can see my new drive; it even uses the same drive letter as it did in normal mode. Running System Restore, it lists a new Automatic Restore point created while sitting at the Recovery Console, along with all of my backups. Selecting the backup I want (or any other), I get a dialog:
    "The backup drive could not be found. System Restore is looking for restore points on your backup. Make sure the backup drive is on and connected to this computer and then click OK."
    What do I need to do to allow System Restore to see the restore points?
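
    For reference, granting yourself access to the System Volume Information folder on the rebuilt drive presumably looked something like the following (a sketch only; E: stands in for whatever drive letter the restored disk actually uses):

        takeown /f "E:\System Volume Information" /r /d y
        icacls "E:\System Volume Information" /grant Administrators:F /t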

    Read the article

  • windows server 2003 speed issues

    - by farzinSH
    I have an HP server running Windows Server 2003 and 50 Windows XP clients. For the past week and a half, the network speed has suddenly dropped 2-3 times per day. It gets so slow that none of the clients can work with the HIS program installed on them. We have tried many different things, such as replacing the hubs, switches, and even some wires. Every time, one of these changes solves the problem and the network goes back to its normal state. I checked everything. Even when I disconnected all the clients from the server and connected it to just one computer, the problem still remained for 2 hours. I have narrowed the problem down to a couple of likely suspects:
    - Viruses? (An updated Kaspersky running on the server shows none.)
    - Server hardware failure?
    - Physical memory usage on the server? (The last time the problem occurred, none of the changes above solved the issue, so I restarted the server and checked the physical memory usage, which was 2 GB. But I noticed it keeps increasing over time to over 9 GB... the server has 16 GB of RAM.)
    I surfed the internet and got nothing. Any help would mean a lot to us... thanks in advance.

    Read the article

  • Did Windows 7 Startup Repair trash My Documents?

    - by Metaphile
    Earlier today, I rebooted my computer. Partway through the boot process, it shut down suddenly. When I tried again, I was prompted to run Startup Repair, and I did. Afterwards, my computer booted normally and everything seemed to be in order. Then I noticed that my My Documents folder contains a mix of old and new files. On closer inspection, it appears that Windows has reverted my system to a previous state. Two things puzzle me: 1) According to Microsoft, "System Restore does not affect personal files, such as e-mail, documents, or photos [...]", yet many of my personal files have been affected. 2) Why were some things reverted, but not others? I had recently reorganized a bunch of files in My Documents. The reverted directory structure seems to be a hybrid of old and new, with a lot of new stuff missing. It's hard to say for sure, but it looks like the stuff that's missing would have been in conflict (two folders with the same name, for example), and Windows favored the old stuff. Is this normal behavior for Startup Repair/System Restore? To modify personal files, I mean? Is there a pattern to the mess it's made of My Documents?

    Read the article

  • [OpenVPN] Server and client on same machine, and multiple VPN servers

    - by HiWorld
    Hello everyone, I'm stuck configuring OpenVPN to build a chained VPN connection, like this: CLIENT - VPN1 - VPN2 - INTERNET. I already know how to set up a normal single VPN, but I want to use a chain of VPNs, so I'll explain what I have done and how I did it.
    On VPN1, I have one OpenVPN instance running as a server (which the client connects to) and another running as a client connecting to VPN2, which runs as a server. Here comes the problem: when I connect VPN1 as a client of VPN2, I can no longer connect to VPN1 from CLIENT. My question is how to proceed with this... I also have a third instance working as a server, to use VPN1 without chaining.
    On VPN2, I have one OpenVPN instance as a server, which VPN1 connects to and which then forwards traffic to the net. I'm using TUN interfaces in the configs, and the iptables rules are set up this way:
    VPN1 - OpenVPN server1 subnet: 192.168.6.0 / IP as client of VPN2: 192.168.5.70
        iptables -t nat -A POSTROUTING -s 192.168.6.0 -j SNAT --to-source 192.168.5.70
    VPN2 - OpenVPN server2 subnet: 192.168.5.0
        iptables -t nat -A POSTROUTING -s 192.168.5.0/24 -j SNAT --to-source EXTERNAL_IP_TO_INTERNET
    Hope someone can help me with this. Thanks in advance.
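
    One thing that commonly breaks this kind of chain is VPN1's default route: if the VPN1-to-VPN2 client config replaces it (for example with redirect-gateway), replies to CLIENT's incoming connection get pushed into the VPN2 tunnel instead of back out the interface they arrived on. A sketch of one policy-routing workaround on VPN1, under that assumption (the public IP 198.51.100.10, the gateway 203.0.113.1, the interface eth0 and the table number are all placeholders):

        # keep traffic sourced from VPN1's public address going out the physical
        # interface via the original gateway, regardless of the tunnel's default route
        ip rule add from 198.51.100.10 table 100
        ip route add default via 203.0.113.1 dev eth0 table 100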

    Read the article

  • Can I recover a non-system disk deleted during 2008 R2 setup?

    - by serialhobbyist
    I've done a truly stupid thing and 'deleted' the data disk on a Server 2008 R2 box. Can I recover it? If so, how? I was rebuilding the box because a motherboard change had broken things. I've built loads of boxes and was going through the standard stuff without much concentration. I got to the disk screen, which normally displays the two partitions on the drive: the recovery one and the system one. As normal, I deleted the two things I saw. It was only when the two lots of unallocated space didn't merge into one that the full horror of what I'd done hit me. Yes, I've got backups... of the stuff I have space to back up. The real irony is that, earlier in the day, I'd ordered two 1 TB disks to deal with the problem. So, anyway, I'd really like to get this partition back because it'll save me a lot of time. How can I do it?

    Read the article

  • I can get in, but I can't get out

    - by robwilkerson
    Like most technical folks, I suppose, I'm my family's primary source of tech support. I'm a developer--not a sysadmin--by trade and tonight I bumped into something I've never seen before. I'm hoping someone here has. In order to better help my Mom, I have her set up on a home network behind a Linksys router (WRT54G). She's got a Mac, so I have her router set up to forward SSH requests to her laptop's internal IP. I also have her router running DDNS through DynDns. Tonight she called to tell me that she can't access the Internet. Assuming it was one of the many simple, stupid problems most of us encounter with parents, I logged into the router admin remotely and took a look around. Everything looked normal. Then I SSH'd into her machine to check out her IP, DNS, etc. settings. Everything still looked fine. Then I noticed something weird. When SSH'd into her machine, I can't ping her router. In other words, I seem to be able to access her computer through her router, but not access her router from her computer. A traceroute dies immediately as well. Any ideas what I might try next? I've bounced her computer and even unplugged her router (it was plugged back in, of course). Thanks.
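
    A short diagnostic sketch that could be run over the same SSH session the next time this happens, just to see where the path to the router breaks (plain macOS commands; 192.168.1.1 stands in for the router's actual LAN address):

        # what does the Mac think its default gateway is?
        netstat -rn | grep default
        # does the gateway show up in the ARP cache at all?
        arp -a
        # try the gateway directly
        ping -c 3 192.168.1.1
        traceroute -n 192.168.1.1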

    Read the article

  • Ethernet/8P8C crimp contacts bent

    - by Fire Lancer
    (If anyone knows the correct terminology, please correct me.) I've got a fairly large number of existing Ethernet cables, and over the years many have got damaged connector clips, so I got a crimp tool and some new connectors for them. However, in all 4 attempts so far, 2 or more of the little copper contacts that bite into the wires have instead just bent to one side and gone into the gaps in the crimp tool... Unless this really is me doing something wrong (what?), I am inclined to blame the hardware, but is it the crimper or the new connectors I got? I tried to take a picture; you can just about see, looking from the left, that the 3rd, 6th, 7th and 8th pins didn't get pushed in, and so don't make a connection. Unfortunately my camera was barely able to focus on it, and then this website converted it to a JPEG...
    Update (connectors/cable/tools): The wires are stranded (they look like about 6 strands, with no sign of being aluminium rather than copper), and the pins(?) have 2 little flat spikes lengthways along the cable (I understand these dig into the wire, whereas solid-core connectors would have 2 plates designed to go around the core?). The crimper was http://www.amazon.co.uk/gp/product/B0013EXTKK/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1 (it seemed to be highly rated; I already had tools for cutting/stripping).
    Update 2: Picture of crimp "prongs" (?)
    Update 3: Side picture of connector
    Update 4: Comparison with an old connector. The top (used) connector is one from a few years back (different tool and connectors). What concerns me, and makes me wonder whether it is really the tool I need to replace, is just how thin the pins are on the new one; maybe a tool could legitimately bend some into a gap rather than pushing them in fully? In fact, I can move individual pins to the side significantly with my fingernail; is that normal?

    Read the article

  • Cause of laptop only booting once every 3-10 times

    - by user16441
    My 3-year-old Asus Eee laptop has been working flawlessly in the past, but in the last week it has started behaving oddly. When I boot, one of the following happens:
    - The screen remains totally black. Absolutely nothing comes up and I need to retry booting. (Happens 70% of the time.)
    - The laptop starts a regular boot, but somewhere during the boot process the screen goes black and I need to retry (20% of the time). The screen going black is more likely when I move the laptop around during boot.
    - The laptop boots properly.
    Once booted, all is fine, but I'm afraid to turn it off and have been keeping the laptop running into the night. Additional details:
    - All the normal lights light up when I boot, whichever scenario occurs.
    - There are no odd sounds or beeps, whichever scenario occurs.
    - I thought it might be the SSD drive that is dying, but it does not support SMART and it appears difficult to troubleshoot.
    - I tried booting with and without the battery, but the scenarios are identical.
    Before completely investigating the hard drive, I'd like to hear opinions regarding the cause of this problem. Is this most likely the hard drive, or could there be another cause for these symptoms? Using Arch Linux.

    Read the article

  • Folder doesn't show up in explorer, cmd, and python even though I can access it, how can I fix this?

    - by Miebster
    I am accessing another computer on the network using a mapped network drive. The path looks like \\192.168.0.100\d$, which is mapped to my computer's M: drive. I can access, view, create, delete, move, etc. folders on this drive. However, some folders don't show up in Windows Explorer, even though I can access them. Example: let's say that M:\stuff\more_stuff is a directory.
    What I can't do:
    - When Windows Explorer is pointed at M:\stuff, I can't see more_stuff
    - In a cmd prompt at M:\stuff, "dir" doesn't find more_stuff
    - In a cmd prompt at M:\stuff, "dir /a" doesn't find more_stuff
    - In Python, os.listdir at M:\stuff doesn't find more_stuff
    What I can do:
    - Typing M:\stuff\more_stuff into the address bar lets me access the folder as normal.
    Because there is no indication that this folder even exists, there could be more like it. I have no way of knowing how many folders are magically hidden on this mapped drive. What are some steps I can take to figure out why this folder is hidden (with the end goal of making it no longer hidden)?
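
    A tiny Python sketch of the mismatch being described, which can also confirm whether the same thing happens when the UNC path is used directly instead of the M: mapping (the paths are the example ones from the question):

        import os

        for base in (r"M:\stuff", r"\\192.168.0.100\d$\stuff"):
            names = os.listdir(base)                                   # directory listing
            direct = os.path.isdir(os.path.join(base, "more_stuff"))   # direct access
            print(base, "listed:", "more_stuff" in names, "direct access:", direct)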

    Read the article

  • ActionLink Problem with Client Template Telerik MVC grid

    - by Tassadaque
    Hi, I'm using the Telerik grid to present memos received by a user. Below is the code:

        <% Html.Telerik().Grid<UserManagement.Models.SentMemos>()
            .Name("ReceivedMemos")
            .Sortable(sorting => sorting
                .OrderBy(sortOrder => sortOrder.Add(o => o.MemoDate).Descending()))
            .DataBinding(dataBinding => dataBinding
                // Ajax binding
                .Ajax()
                // The action method which will return JSON
                .Select("_AjaxBindingReceivedMemos", "OA"))
            .Columns(colums =>
            {
                colums.Bound(o => o.MemoID).ClientTemplate(Html.ActionLink("Reply", "ReplyMemo", "OA", new { MemoID = "<#=MemoID#>" }, null).ToString()).Title("Reply").Filterable(false).Sortable(false);
                colums.Bound(o => o.MemoID).ClientTemplate(Html.ActionLink("Acknowledge", "PreviewMemo", "OA", new { id = "<#=MemoID#>" }, null).ToString()).Title("Acknowledge").Filterable(false).Sortable(false);
                colums.Bound(o => o.Subject).ClientTemplate(Html.ActionLink("<%#=Subject#>", "PreviewMemo", "OA", new { id = "<#=MemoID#>" }, null).ToString()).Title("Subject");
                //colums.Bound(o => Html.ActionLink(o.Subject, "PreviewMemo", "OA", new { id = o.MemoID }, null).ToString()).Title("Subject");
                colums.Bound(o => o.FromEmployeeName);
                colums.Bound(o => o.MemoDate);
            })
            .Sortable()
            .Filterable()
            .RowAction((row) => { row.HtmlAttributes.Add("style", "background:#321211;"); })
            .Pageable(pager => pager.PageSize(6))
            .PrefixUrlParameters(false)
            //.ClientEvents(events => events.OnRowDataBound("onRowDataBound"))
            .Render(); %>

    I'm binding the third column (Subject). My intention is to make an ActionLink where the subject is the display text and the ID is a dynamic value coming from <#= MemoID #>. The memo ID part works fine and gives me a link with dynamic memo IDs. The problem is with the subject: "<#= Subject #>" is rendered as-is on the screen instead of being mapped to the actual subject of the memo. I have also tried "<%#= Subject %>" but to no avail. Any help is highly appreciated. Regards
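
    For what it's worth, the explanation usually offered for this is that Html.ActionLink HTML-encodes its link text, so the <#= Subject #> token reaches the browser encoded and the client-side template engine never evaluates it. A commonly suggested workaround (a sketch, not verified against this exact grid version, and it assumes the default {controller}/{action}/{id} route so the ID can be appended as a path segment) is to build the anchor markup directly in the ClientTemplate string and only use Url.Action for the URL:

        colums.Bound(o => o.Subject)
              .ClientTemplate("<a href=\"" + Url.Action("PreviewMemo", "OA") + "/<#= MemoID #>\"><#= Subject #></a>")
              .Title("Subject");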

    Read the article

  • Core Data migration failing with error: Failed to save new store after first pass of migration

    - by unforgiven
    In the past I had successfully implemented automatic migration from version 1 of my data model to version 2. Now, using SDK 3.1.3, migrating from version 2 to version 3 fails with the following error:

        Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0x5363360 "Operation could not be completed. (Cocoa error 134110.)", {
            NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=256 UserInfo=0x53622b0 "Operation could not be completed. (Cocoa error 256.)";
            reason = "Failed to save new store after first pass of migration.";
        }

    I have tried automatic migration using NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption, and also migration using only NSMigratePersistentStoresAutomaticallyOption while providing a mapping model from v2 to v3. I see the above error logged, and no objects are available in the application. However, if I quit the application and reopen it, everything is in place and working. The Core Data methods I am using are the following:

        - (NSManagedObjectModel *)managedObjectModel {
            if (managedObjectModel != nil) {
                return managedObjectModel;
            }
            NSString *path = [[NSBundle mainBundle] pathForResource:@"MYAPP" ofType:@"momd"];
            NSURL *momURL = [NSURL fileURLWithPath:path];
            managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];
            return managedObjectModel;
        }

        - (NSManagedObjectContext *)managedObjectContext {
            if (managedObjectContext != nil) {
                return managedObjectContext;
            }
            NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
            if (coordinator != nil) {
                managedObjectContext = [[NSManagedObjectContext alloc] init];
                [managedObjectContext setPersistentStoreCoordinator:coordinator];
            }
            return managedObjectContext;
        }

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"MYAPP.sqlite"]];
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:options error:&error]) {
                // Handle error
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            }
            return persistentStoreCoordinator;
        }

    In the simulator, I see that this generates a MYAPP~.sqlite file and a MYAPP.sqlite file. I tried to remove the MYAPP~.sqlite file, but

        BOOL oldExists = [[NSFileManager defaultManager] fileExistsAtPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"MYAPP~.sqlite"]];

    always returns NO. Any clue? Am I doing something wrong? Thank you in advance.
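
    One simple check here is to list what is actually sitting in the Documents directory after the failed migration, since the "MYAPP~.sqlite" name above is only a guess at what the migrator left behind. A small diagnostic sketch reusing the same applicationDocumentsDirectory helper as the code above:

        NSError *listError = nil;
        NSArray *contents = [[NSFileManager defaultManager]
            contentsOfDirectoryAtPath:[self applicationDocumentsDirectory] error:&listError];
        NSLog(@"Documents directory contains: %@ (error: %@)", contents, listError);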

    Read the article

  • NHibernate.NHibernateException: Unable to locate row for retrieval of generated properties: [MyNames

    - by Brad Heller
    It looks like all of my mappings are compiling correctly, and I'm able to get a valid session from the session factory. However, when I call ISession.SaveOrUpdate(obj); I get this exception. Can anyone please help point me in the right direction?

        private Configuration configuration;
        protected Configuration Configuration {
            get {
                configuration = configuration ?? GetNewConfiguration();
                return configuration;
            }
        }

        protected ISession GetNewSession() {
            var sessionFactory = Configuration.BuildSessionFactory();
            var session = sessionFactory.OpenSession();
            return session;
        }

        [TestMethod]
        public void TestSessionSave() {
            var company = new Company();
            company.Name = "Test Company 1";
            DateTime savedAt = DateTime.Now;
            using (var session = GetNewSession()) {
                try {
                    session.SaveOrUpdate(company);
                } catch (Exception e) {
                    throw e;
                }
            }
            Assert.IsTrue((company.CreationDate != null && company.CreationDate > savedAt), "Company was saved, creation date was set.");
        }

    For those who might be interested, here is my mapping for this class:

        <!-- Company -->
        <class name="MyNamespace.Company,MyLibrary" table="Companies">
            <id name="Id" column="Id">
                <generator class="native" />
            </id>
            <property name="ExternalId" column="GUID" generated="insert" />
            <property name="Name" column="Name" type="string" />
            <property name="CreationDate" column="CreationDate" generated="insert" />
            <property name="UpdatedDate" column="UpdatedDate" generated="always" />
        </class>
        <!-- End Company -->

    Finally, here is my config -- I'm just connecting to a SQL Server CE instance for testing purposes.

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
            <session-factory name="">
                <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
                <property name="connection.driver_class">NHibernate.Driver.SqlServerCeDriver</property>
                <property name="connection.connection_string">Data Source="D:\Build\MyProject\Source\MyProject.UnitTests\MyProject.TestDatabase.sdf"</property>
                <property name="dialect">NHibernate.Dialect.MsSqlCeDialect</property>
                <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
            </session-factory>
        </hibernate-configuration>

    Thanks!

    Read the article

  • NHibernate 2.1 and MySQL 5 - InvalidCastException on setup

    - by Nash
    Hello there, I am trying to use NHibernate with Spring.NET and MySQL 5. However, when setting up the connection and creating the session factory object, I get this InvalidCastException: NHibernate seems to cast MySql.Data.MySqlClient.MySqlConnection to System.Data.Common.DbConnection, which causes the exception.

        System.InvalidCastException was unhandled.
        Message="Unable to cast object of type 'MySql.Data.MySqlClient.MySqlConnection' to type 'System.Data.Common.DbConnection'."
        Source="NHibernate"
        StackTrace:
            at NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Prepare() in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SuppliedConnectionProviderConnectionHelper.cs:line 25
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:line 43
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:line 17
            at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) in c:\CSharp\NH\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:line 169
            at NHibernate.Cfg.Configuration.BuildSessionFactory() in c:\CSharp\NH\nhibernate\src\NHibernate\Cfg\Configuration.cs:line 1090
            at OrmTest.Program.Main(String[] args) in C:\Users\Max\Documents\Visual Studio 2008\Projects\OrmTest\OrmTest\Program.cs:line 24
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
        InnerException:

    I am using the programmatic setup approach in order to get a quick NHibernate setup. Here is the setup code:

        Configuration config = new Configuration();
        Dictionary<string, string> props = new Dictionary<string, string>();
        props.Add("dialect", "NHibernate.Dialect.MySQL5Dialect");
        props.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
        props.Add("connection.driver_class", "NHibernate.Driver.MySqlDataDriver");
        props.Add("connection.connection_string", "Server=localhost;Database=orm_test;User ID=root;Password=password");
        props.Add("proxyfactory.factory_class", "NHibernate.ByteCode.Spring.ProxyFactoryFactory, NHibernate.ByteCode.Spring");
        config.AddProperties(props);
        config.AddFile("Person.hbm.xml");

        ISessionFactory factory = config.BuildSessionFactory();
        ISession session = factory.OpenSession();

    Is something missing? I downloaded the current MySQL Connector from the MySQL website.
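
    The stack trace shows the cast happening inside SchemaMetadataUpdater, which NHibernate 2.1 runs while building the session factory in order to read the database's reserved words. If the MySql.Data assembly that actually gets loaded is one whose MySqlConnection does not derive from DbConnection (for example an old copy being picked up instead of the freshly downloaded connector), that step fails. One workaround that is often mentioned is to skip the keyword import; hbm2ddl.keywords is a standard NHibernate setting, but whether it resolves this particular setup is an assumption:

        // added alongside the other properties, before config.AddProperties(props);
        props.Add("hbm2ddl.keywords", "none");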

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in SQL Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies, perhaps it will help to know that I'm skilled in MS SQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in the outer reports and the same data, but in detail, in the inner reports. In order to spare the DB server from multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file, and you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQL Developer, and I can query the data from the temp tables. However, this didn't appear to work out, because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS, my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from SQL Developer works fine, for example:

        DECLARE
            ActivityCode varchar2(15) := '1208-0916 ';
            ExecutionID varchar2(32) := SYS_GUID();
        BEGIN
            CIPProjectBudget (ActivityCode, ExecutionID);
        END;

    Never mind that in this example I don't know the GUID; this simply proves it works, because rows are inserted into my four tables. But in the SSRS report, I'm still getting no rows in my Datasets, and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of:
    - Oracle uses implicit transactions and my changes aren't getting committed?
    - Even though I can prove that the non-rowset-returning SP is executing (because if I leave out the parameter mapping it complains at report rendering time about not having enough parameters), perhaps it's not really executing. Somehow.
    - Wrong execution order isn't the problem, or rows would appear in the tables, and they aren't.
    I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working, and I am stuck. If you want more details: in my SSRS report I have a List object (a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually there will be four total reports: one main report with three nested subreports.
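
    On the "implicit transactions" guess: Oracle does require an explicit COMMIT before other sessions can see rows inserted into ordinary tables, and the report's datasets may well run on different connections. So if CIPProjectBudget does not commit, the later datasets could find nothing even though the inserts ran. A sketch of what the end of the procedure might look like under that assumption (the procedure name and parameters come from the question; everything else is illustrative):

        CREATE OR REPLACE PROCEDURE CIPProjectBudget (
            p_activity_code IN VARCHAR2,
            p_execution_id  IN VARCHAR2
        ) AS
        BEGIN
            -- ... populate the four work tables, keyed by p_execution_id ...
            COMMIT;  -- make the rows visible to the report's other connections
        END CIPProjectBudget;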

    Read the article

  • How do I use PackageManager.addPreferredActivity()?

    - by afonseca
    In SDK 1.5 I was using the PackageManager class to set the preferred home screen to be my app, using PackageManager.addPackageToPreferred(). In the new SDK (using 2.1) this has been deprecated, so I'm trying to use addPreferredActivity() for the same result, but it's not working as expected. Some necessary background: I'm writing a lock screen replacement app, so I want the home key to launch my app (which will already be running, hence having the effect of disabling the key). When the user "unlocks" the screen I intend to restore the mapping so everything works as normal. In my AndroidManifest.xml I have:

        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
            <category android:name="android.intent.category.HOME" />
            <category android:name="android.intent.category.DEFAULT" />
        </intent-filter>

        <uses-permission android:name="android.permission.SET_PREFERRED_APPLICATIONS">
        </uses-permission>

    In my code I have the following snippet:

        // Set as home activity
        // This is done so we can appear to disable the Home key.
        PackageManager pm = getPackageManager();
        //pm.addPackageToPreferred(getPackageName());
        IntentFilter filter = new IntentFilter("android.intent.action.MAIN");
        filter.addCategory("android.intent.category.HOME");
        filter.addCategory("android.intent.category.DEFAULT");
        ComponentName[] components = new ComponentName[] {
            new ComponentName("com.android.launcher", ".Launcher")
        };
        Context context = getApplicationContext();
        ComponentName component = new ComponentName(context.getPackageName(), MyApp.class.getName());
        pm.clearPackagePreferredActivities("com.android.launcher");
        pm.addPreferredActivity(filter, IntentFilter.MATCH_CATEGORY_EMPTY, components, component);

    The resulting behavior is that the app chooser comes up when I press the Home key, which indicates that the clearPackagePreferredActivities() call worked, but my app did not get added as the preferred activity. Also, the first line in the log below says something about "dropping preferred activity" for the Intent:

        04-06 02:34:42.379: INFO/PackageManager(1017): Result set changed, dropping preferred activity for Intent { act=android.intent.action.MAIN cat=[android.intent.category.HOME] flg=0x10200000 } type null
        04-06 02:34:42.379: INFO/ActivityManager(1017): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.HOME] flg=0x10200000 cmp=android/com.android.internal.app.ResolverActivity }

    Does anyone know what this first log message means? Maybe I'm not using the API correctly; any ideas? Any help would be greatly appreciated.
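
    For context on the "Result set changed" message: the package manager records, along with the preferred activity, the set of components that matched the intent when addPreferredActivity() was called, and it drops the preference when the current set of HOME candidates no longer matches that recorded set. A commonly suggested way around that (a sketch, untested here, continuing from the pm and component variables in the snippet above) is to query the current HOME activities and pass all of them, including your own, as the possible-components array:

        Intent homeIntent = new Intent(Intent.ACTION_MAIN);
        homeIntent.addCategory(Intent.CATEGORY_HOME);
        List<ResolveInfo> homes = pm.queryIntentActivities(homeIntent, 0);

        ComponentName[] candidates = new ComponentName[homes.size()];
        for (int i = 0; i < homes.size(); i++) {
            ActivityInfo info = homes.get(i).activityInfo;
            candidates[i] = new ComponentName(info.packageName, info.name);
        }

        IntentFilter homeFilter = new IntentFilter(Intent.ACTION_MAIN);
        homeFilter.addCategory(Intent.CATEGORY_HOME);
        homeFilter.addCategory(Intent.CATEGORY_DEFAULT);
        pm.addPreferredActivity(homeFilter, IntentFilter.MATCH_CATEGORY_EMPTY, candidates, component);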

    Read the article

  • Stop lazy loading or skip loading a property in NHibernate? Proxy cannot be serialized through WCF

    - by HelloSam
    Consider a parent/child relationship, with a class and mapping for each. I am using NHibernate to read the objects from the database, and I intend to use WCF to send the objects across the wire.
    Goal: when reading the parent object, I want to decide selectively, at different execution paths, when to load the child objects, because I don't want to read more than I need. Those partially loaded objects must still be sendable through WCF. When I say I don't load something, I mean neither side will access that property.
    Problem: when such a partially loaded object is sent through WCF, the property, although marked as [DataContract], cannot be serialized, because it holds a lazy-load proxy instead of the real known type.
    What I want to achieve, or solutions I can think of:
    - lazy=false or lazy=true doesn't work. The former eagerly fetches all the relationships, the latter creates a proxy, but I want neither; a null would be best. I don't need lazy loading; I hope to get a null for those references that I don't want to fetch. A null, but not a proxy. That would make WCF happy and waste less time constructing a lazy-load proxy. Could I have something like a null proxy factory?
    - Or make WCF ignore properties that hold a proxy instead of a real object. I tried the IDataContractSurrogate solution, but only the parent is passed to GetObjectToSerialize; I never observed a proxy being passed through GetObjectToSerialize, leaving no chance to un-proxy it.
    Edit: after reading the comments and more surfing on the Internet, it seems to me that DTOs would shift a major part of the computation to the server side. But for the project I am working on, 50% of the time the client is "smarter" than the server, and the server is more like a data store with validation and verification. Though I agree the server is not exactly dumb: I already have to decide when to fetch the extra references, and DTOs will make this very explicit. Maybe I should just take the pain. I didn't know about http://automapper.codeplex.com/ before; this motivates me a little more to take the pain.
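
    Since AutoMapper comes up at the end, a very small sketch of what the DTO route could look like. The Parent/ParentDto/Child/ChildDto names and the Children collection are made up for illustration, and the calls use the old static Mapper API of that era:

        // defined once at startup
        Mapper.CreateMap<Child, ChildDto>();
        Mapper.CreateMap<Parent, ParentDto>()
              .ForMember(d => d.Children, o => o.Ignore()); // summary case: children skipped

        // summary case: map inside the NHibernate session, children never touched
        ParentDto summary = Mapper.Map<Parent, ParentDto>(parent);

        // detail case: force the children to load, then map them explicitly
        NHibernateUtil.Initialize(parent.Children);
        ParentDto detail = Mapper.Map<Parent, ParentDto>(parent);
        detail.Children = Mapper.Map<IList<Child>, List<ChildDto>>(parent.Children);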

    Read the article
