Search Results

Search found 17437 results on 698 pages for 'nick long'.

  • What would cause Memcached to Hang for 2+ seconds?

    - by Brad Dwyer
    I'm going nuts trying to scale memcached. From their site: Memcached operations are almost all O(1). Connecting to it and issuing a get or stat command should never lag. If connecting lags, you may be hitting the max connections limit. See ServerMaint for details on stats to monitor. If issuing commands lags, you can have a number of tuning problems. Most common are hardware problems, not enough RAM (swapping), network problems (bandwidth, dropped packets, half-duplex connections). On rare occasion OS bugs or memcached bugs can contribute. Well.. it is most certainly not performing like an O(1) operation for me. Under low to normal load on our site memcached response times for get and set ops are about 0.001 seconds. Not bad. But if we triple the load we get outliers that take 100x (or in rare cases 1000x!) that long. I even had one instance where it took 2.2442 seconds for memcached to store a value. Obviously this is killing our site. Here's the output of Memcached-getStats during one of the slow periods: [pid] => 18079 [uptime] => 8903 [threads] => 4 [time] => 1332795759 [pointer_size] => 32 [rusage_user_seconds] => 26 [rusage_user_microseconds] => 503872 [rusage_system_seconds] => 125 [rusage_system_microseconds] => 477008 [curr_items] => 42099 [total_items] => 422500 [limit_maxbytes] => 943718400 [curr_connections] => 84 [total_connections] => 4946 [connection_structures] => 178 [bytes] => 7259957 [cmd_get] => 1679091 [cmd_set] => 351809 [get_hits] => 1662048 [get_misses] => 17043 [evictions] => 0 [bytes_read] => 109388476 [bytes_written] => 3187646458 [version] => 1.4.13 So things that I have ruled out so far are: Hitting the max connections limit (curr_connections of 84 is well below the default of max of 1024) Swapping - the machine has 900M out of 1024M of memory dedicated to memcached on a dedicated machine. It only appears to be using about 7MB of data as per the bytes stat. How would I diagnose the other hardware problems? prstat doesn't really show a whole lot going on in terms of CPU or memory usage. Not sure how to figure out the network problems but as this is a dedicated server on the same private network as the web box I don't think it's a connectivity issue (ping is less than a millisecond between the boxes). Is there something else I'm missing here? It's driving me nuts. Edit: Also forgot to mention that I've tried both persistent and non-persistent connections with minimal-to-no impact.
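
    One quick way to separate memcached-internal latency from connection-path latency is to time raw gets from the web box itself with a small shell loop; the host address, key name, and 50 ms threshold below are placeholder assumptions, not values from the post, and the -q flag can be dropped if your netcat lacks it.

      HOST=10.0.0.5; PORT=11211                     # placeholder address of the memcached box
      for i in $(seq 1 1000); do
        t0=$(date +%s%N)                            # nanoseconds (GNU date)
        printf 'get probe_key\r\nquit\r\n' | nc -q 1 "$HOST" "$PORT" > /dev/null
        t1=$(date +%s%N)
        ms=$(( (t1 - t0) / 1000000 ))
        [ "$ms" -gt 50 ] && echo "request $i took ${ms} ms"
      done

    If the slow requests cluster under load, watching counters such as listen_disabled_num and conn_yields in the stats output over time can help tell connection pressure apart from hardware trouble.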

  • File Server - Storage configuration: RAID vs LVM vs ZFS vs something else?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this: 500 GB is not really big enough (some projects are larger) It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers. ("the project is on server2 in share4" etc) So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks ok but it seems like no one uses it. ZFS seems interesting but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users? Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective. (i.e. they see one "folder" per server) Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10, but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives and 1.5 TB is pretty small. (Still an option for at least one server, however) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and stored most files on a single drive only) we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?) so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
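
    For reference, a minimal sketch of the ZFS route being weighed here; the pool name, dataset name, and device paths are placeholders, and it assumes a platform where ZFS is already installed.

      zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde   # single parity: survives one failed drive
      zfs create tank/projects
      zfs set compression=on tank/projects
      zpool status tank                                             # pool health and per-drive state

    The same one-share-per-server goal can also be reached with mdadm RAID plus LVM underneath Samba, but ZFS folds the volume management, checksumming, and snapshots into a single layer, which is much of its appeal in this comparison.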

  • What seems to be plaguing my hard drives?

    - by Craig
    In a little bit of a tech nightmare here. I know oodles about software, not so much more than the above average user about hardware. I recently had to toss an old desktop of mine. It was gradually getting slower and slower, and after shutting the desktop down for long periods of time, it would choke up upon startup. Sometimes it'd give a disk read error, sometimes say no OS was found, etc. Restarting it about 5-15 times would eventually boot properly. Weird. I also noticed that startup programs were going missing, Dropbox was reindexing my entire folder, and Backblaze was backing up less than the number of files that it should. This led me to believe it was probably a hard drive issue. I began to wonder why I'd have hard drive issues, and came to closure when I assumed it was because of recent power surges and outages. I'm sure that does a number on the drive. I bought a new desktop recently. It's not a beast or anything, but it's enough for what I do. It's an eMachines (I know, I know) Ultra-Slim (http://www.amazon.com/eMachines-Ultra-Slim-ER1401-57-Desktop-PC/dp/B00475OG9U). This is ideal for me because it's small and portable. It comes with an AC adapter and battery, like a laptop. Just to be safe, I bought an uninterruptible power supply on top of that. It's basically protected completely from any outages that might scramble the drive. I set this up a few days ago and for the past few days I've been perfecting settings, downloading the usual applications, etc. Two days ago, I noticed Dropbox was reindexing my entire Dropbox folder. I installed both Dropbox and Backblaze on this system, but it is very much more lightweight than the other. Only about 15 third-party applications installed. I thought that maybe Dropbox and Backblaze were stressing my system, so I turned off Backblaze. Still, Dropbox runs and comes across this infinite reindex issue. I noticed that upon a reboot recently, two applications did not start on startup either. Also, much like my old desktop, every 3rd or 4th reboot, I'll be forced into a chkdsk. This makes me incredibly nervous. What could possibly be going wrong with my old, years-old desktop that is immediately causing the same issues on my new one? I've considered all of the basics. I'm in a very air-conditioned room. I run routine virus scans. I'd like to think I take care of my systems very well. What is this issue that is haunting me? There's always the possibility that this new desktop has a junk hard drive, but it just seems way too coincidental.
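
    One low-effort check on both the old and the new machine is the drives' SMART data; the sketch below uses smartctl from smartmontools (which also has a Windows build), and /dev/sda is a placeholder device name.

      smartctl -H /dev/sda        # overall health verdict
      smartctl -A /dev/sda        # attributes: watch Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count
      smartctl -t short /dev/sda  # short self-test; read the result later with: smartctl -l selftest /dev/sda

    Rising pending or reallocated sector counts would point at the disk itself; clean SMART data on both machines would make a shared cause (the power events, a common peripheral, or the sync software) more likely.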

  • How to use public IPs in the case of two ISPs when the two differ from each other

    - by user1471995
    Please bear with my long explanation, but it is important to explain the actual problem. Please also pardon my knowledge of pfSense, as I am new to this. I have a single pfSense box with 3 Ethernet adapters. Before moving to the configuration for these, I want to let you know I have two Ethernet-based Internet leased-line connections; let's call them ISP A and ISP B. The last interface is LAN, which is connected to a network switch. Typical network diagram ISP A ----- PFSense ----> Switch ---- > Servers ISP B ----- ISP A (Initially Purchased) WAN IP:- 113.193.X.X /29 Gateway IP :- 113.193.X.A and 4 other usable public IPs in the same subnet (so the gateway for those IPs is also the same). ISP B (Recently Purchased) WAN IP:- 115.115.X.X /30 Gateway IP :- 115.115.X.B and 5 other usable public IPs in a different subnet (so the gateway for those IPs is different); for example, if 115.119.X.X2 is one of the IPs from that list then the gateway for this IP is 115.119.X.X1. Configuration for the 3 interfaces: Interface : WAN Network Port : nfe0 Type : Static IP Address : 113.193.X.X /29 Gateway : 113.193.X.A Interface : LAN Network Port : vr0 Type : Static IP Address : 192.168.1.1 /24 Gateway : None Interface : RELWAN Network Port : rl0 Type : Static IP Address : 115.115.X.X /30 (I am not sure of the subnet) Gateway : 115.115.X.B To use the public IPs from ISP A I have done the following steps: a) Created Virtual IPs using either ARP or IP Alias. b) Using Firewall: NAT: Port Forward I have created specific NAT entries from one public IP to my internal LAN private IP, for example :- WAN TCP/UDP * * 113.193.X.X1 53 (DNS) 192.168.1.5 53 (DNS) WAN TCP/UDP * * 113.193.X.X1 80 (HTTP) 192.168.1.5 80 (HTTP) WAN TCP * * 113.193.X.X2 80 (HTTP) 192.168.1.7 80 (HTTP) etc. c) The current state for Firewall: NAT: Outbound is Manual, and only the default rules defined for the WAN are present. d) If this section is relevant: on the Firewall: Rules WAN tab the following default rules have been generated. * RFC 1918 networks * * * * * Block private networks * Reserved/not assigned by IANA * * * * * * To use the public IPs from ISP B I have done the following steps: a) Created Virtual IPs using either ARP or IP Alias. b) Using Firewall: NAT: Port Forward I have created specific NAT entries from one public IP to my internal LAN private IP, for example :- RELWAN TCP/UDP * * 115.119.116.X.X1 80 (HTTP) 192.168.1.11 80 (HTTP) c) The current state for Firewall: NAT: Outbound is Manual, and only the default rules defined for the RELWAN are present. d) If this section is relevant: on the Firewall: Rules RELWAN tab the following default rules have been generated. * RFC 1918 networks * * * * * * Reserved/not assigned by IANA * * * * * * The last thing before my actual query is to make you aware that, to have a multi-WAN setup, I have done the following steps: a) Under System: Gateways, on the Groups tab, I have created a new group as follows: MultipleGateway WANGW, RELWAN Tier 2,Tier 1 Multiple Gateway Test b) Then under Firewall: Rules, on the LAN tab, I have created a rule for internal traffic as follows: * LAN net * * * MultipleGateway none c) This setup works: if I unplug the first ISP, traffic starts routing using ISP B, and vice versa. Now my main query and problem is that I am not able to use the public IP addresses allocated by ISP B; I have tried many small tweaks but none of them were successful. The notable differences between the two ISPs are: a) In the case of ISP A, the usable public IP addresses are on the same subnet, so the gateway used for the WAN IP is the same as for the other public IP addresses.
    b) In the case of ISP B, the usable public IP addresses are on a different subnet, so obviously the gateway IP for them is different from the WAN gateway's IP. Please let me know how to use ISP B's usable public IP addresses; in the future I am also going to rely on ISP B alone for more IPs.
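
    One way to narrow this down is to look, from the pfSense shell (Diagnostics or SSH), at the NAT and filter rules pf actually generated for the second WAN; the rl0 and 115.115 patterns below come from the post above, adjust as needed.

      pfctl -s nat   | grep -i -e rl0 -e 115.115    # outbound NAT and port forwards touching the ISP B interface
      pfctl -s rules | grep -i rl0                  # filter rules bound to that interface
      pfctl -s state | grep 115.115                 # live states using the ISP B addresses

    If the virtual IPs for ISP B never appear in that output, the problem is likely in how the VIPs or the manual outbound NAT rules are defined rather than in the gateway group.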

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on luks over software raid5. The filesystem was operating "just fine" for several years when I was beginning to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the luks container, and then when I unmounted and tried to resize2fs I was given the message the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of inode being removed type messages (can't remember exactly) I killed e2fsck and tried to remount the filesystem to backup data I was concerned about. When trying to mount at this point I get: # mount /dev/mapper/candybox /candybox mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted: kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log: EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845) ... EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098) BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939] Attempts to restart e2fsck results in: # e2fsck /dev/mapper/candybox e2fsck 1.41.14 (22-Dec-2010) e2fsck: Group descriptors look bad... trying backup blocks... candy: recovering journal e2fsck: unable to set superblock flags on candy At this point, I decided it best to order some more drives and make an image using ddrescue Now two weeks later I have an image of the luks partition in a .img file. # ls -lh total 14T -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile After numerous attempts using everything I could find online I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem readonly without the journal and recover 960G of data. It gave all kinds of errors of various directories not existing and so forth but I was able to get some stuff. Which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of correcting group counts, I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. Re-ran again and said yes to all questions with the same result but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again, I can't come up with any other methods other than dropping to something like photorec which will give me an absolute mess with how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back. 
I wish I could give more detailed logs of the commands that have run, but the output has long since scrolled past except for what gets logged to syslog, and my memory is not as detailed due to the timeframe this has occurred over. Any help is greatly appreciated!
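
    Once the image has been re-copied from the array, it may be worth repeating the superblock experiments against a working copy through a loop device rather than against the array itself; device names, mount points, and the block numbers below are placeholders.

      cp candybox.img candybox-work.img              # never experiment on the only copy
      LOOP=$(losetup -f --show candybox-work.img)    # e.g. /dev/loop0; if this is the raw LUKS container, open it first with cryptsetup luksOpen
      dumpe2fs "$LOOP" | grep -i 'superblock at'     # list primary and backup superblock locations
      e2fsck -n -b 32768 -B 4096 "$LOOP"             # dry run (-n) against a backup superblock
      mount -o ro,noload "$LOOP" /mnt/recover        # read-only attempt that skips journal replay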

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, Asus M2V MX motherboard, and a single 3 TB WDC Green (another one as mirror may be installed in the future). It's the cheapest solution I found that includes ECC memory, and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and streaming to other desktop computers; storage of the scanned dia slides (3-4k slides, 180 MB TIFF each one plus reduced quality JPEG version); streaming of these photos to a local iPad 2 (maybe Plex App? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, no monitor attached. The absolute requirement is a reliable ZFS solution, plus the ability to easily install packages/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options: NAS4free/FreeNAS OpenIndiana Solaris Express 11 (yeah yeah I know the license requirements, I will write a perl script on it to count it as a development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) require an embedded installation for remote upgrading, but a full install for easy addition of software packages. Since I need at least AirVideo Server (linux/win) and Plex App (win/linux) to stream the photos and some videos to iPad (they both require virtualbox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris also has another advantage: the Crashplan client supports Solaris and I'm already using it for other backups. I would like to leave the option open, even if I will probably be doing backups through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version. The question is now Solaris 11 Express versus OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest and why? I found a lot of information and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana maybe has more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else, I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for myself in the future (where high performance with Mac shares is also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post. UPDATES Given the first answers, I will strongly suggest that the person paying for the hardware add a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but it is better to have something good from the beginning.
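
    Whichever OS wins out, the remote-backup piece looks much the same on all of them; here is a sketch of the zfs send/receive approach mentioned above, with pool, dataset, snapshot, and host names as placeholders.

      zfs snapshot tank/media@2012-10-01
      zfs send tank/media@2012-10-01 | ssh backuphost zfs receive -F backup/media                          # first full pass
      zfs send -i tank/media@2012-09-01 tank/media@2012-10-01 | ssh backuphost zfs receive backup/media    # later incrementals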

  • Windows 8.1 Update 1 Disk Usage 100%

    - by Gookjin Jeong
    Background Information / Computer Specs I have a 14-inch Samsung Series 5 Ultra. Core i5 CPU, 750GB HDD, 8GB RAM, Intel HD Graphics 4000. I've had the computer for about 1.5 years with no major problems. Problem The issue appeared at the beginning of April this year, when I updated the OS to Windows 8.1 Update 1 (not from 8 to 8.1). After being on continually (except for at night, when I put it on sleep mode) for about 48 hours, the disk usage as seen by Task Manager hits 100%. When this happens, everything from opening/closing applications to typing and even bringing up the start screen by pressing the Windows key becomes extremely slow. The only way to make the disk usage decrease is to restart the computer. Then the problem repeats. I've used my current laptop (as well as my previous laptops) this way -- putting it on sleep mode at night and restarting it only when Windows needs to install updates -- for a long time. So I know the 100% disk usage is not due to the way I use the computer. The thing that causes the spike varies. Sometimes it's System, sometimes it's one of the various applications I installed (e.g. Chrome, Evernote, Spotify, Wunderlist, iTunes, etc.), and sometimes it's Antimalware Service Executable, etc. Tried Solutions I think I tried almost every solution out there for this problem: Running the check disk command (chkdsk /b /f /v /scan c:) from Admin Command Prompt Running Windows Memory Diagnostic Disabling Superfetch and Windows Search from services.msc Running "Fix problems with Windows Update" from Control Panel -- Troubleshooting Updating and rolling back the graphics driver (Intel HD 4000) Disabling "Use hardware acceleration when available" from Chrome settings Disabling Intel Rapid Storage Technology Running the SFC /SCANNOW command as recommended here Running a quick scan & a full scan from Windows Defender (no threats found) Taking the hard drive out and putting it back Refreshing the computer, from the Update and recovery -- Recovery option in Windows settings NONE of the above worked for me. I was about to give up but then noticed that one of the main culprits of the disk usage spike, as shown in the "Disk Activity" section of the Resource Monitor, was C:\System (pagefile.sys). I googled around and found that one of the recommended solutions was to disable pagefile. I then went to **Control Panel -- System and Security -- System -- Advanced system settings -- Advanced tab -- Performance settings -- Advanced tab -- "Change" under Virtual memory and discovered that the number for "Currently allocated" at the bottom was 1280MB, although the number for "Recommended" was 4533MB. I immediately changed it to 4533MB and checked my family members' computers to see what the numbers were like. All of theirs had a currently allocated space that was only slightly smaller than the recommended space. See screenshot below: This might fix the problem. I'll have to wait a couple more days.But if it doesn't, what in the world should I do next? I'm guessing the hard drive isn't failing because This computer is less than 2 years old; and Speccy says that the status of the HDD is good. Update 5/27/2014 The "4533MB" solution did not work. I had to reboot the computer about 30 minutes ago because the disk usage again hit 100%. When I opened Resource Monitor the C:\System (pagefile.sys) again was shown to be the culprit. I have now disabled pagefile entirely via the same window shown above in the screenshot. The number for "currently allocated" is now 0MB. 
Will update again in a couple days, or if the problem occurs again, whichever comes sooner.

  • My 2009 MacBook Logic board failed - options to proceed and how difficult?

    - by user181061
    Scannerz just gave my MacBook logic board a big fat F! I upgraded from Snow Leopard to Mountain Lion about 3 weeks ago. The system was running short of memory so I upgraded it. The system was running fine for about 2 weeks. Yesterday the thing started acting erratic. A lot of spinning beach balls, delays, and then some errors saying files couldn't be read to or from the drive. I figured the drive was going because the system is over 3 years old. I ran Scannerz on it and it indicated a lot of errors and irregularities. I rescanned it in cursory mode, and none of them were repeatable, just showing up all over the place in different regions of the scan. I went through the docs and they implied either an I/O cable was bad, a connection was damaged, or the logic board was bad. I tossed on my backup of Snow Leopard that I cloned from the original hard drive because I figured Mountain Lion was to blame and booted from the USB drive with the clone on it. It wasn't. I performed scans on every single port, and errors and irregularities that couldn't be repeated were showing up on every single one of them. I then, for kicks, put a CD into the CD player. Scannerz doesn't test optical drives but I figured surely that will work. No it won't. More spinning beach balls and messages telling me it can't be read. It was working fine 3 days ago. I know a lot of people don't like MacBook's, but mine's been great, at least until now. It was working great even with Mountain Lion after the upgrade. The system is a mid-2009 MacBook. In my opinion, it's a complete waste to toss this system. The display is too good, the keyboard works great, and it still looks good, plus this type of MacBook still uses the FireWire 400 port and I use that for Time Machine backups. I've tried reseating the RAM, it didn't do anything. I shut the system down and put in the old RAM, booted to Snow Leopard, and the problems persist. Here are my questions: The Scannerz documentation somewhere said something about the Airport card not being seated properly, but when I go to iFixit, it's apparent, at least I think it's apparent, that this isn't a slot type Airport card that the user can easily install or remove. If the cables or connections to the Airport card are bad, could they be causing this problem. How about any other connections that can be intermittent, failing or erratic? Any type of resets that I could possibly do to get rid of this? For any of those that have replaced a logic board on a MacBook, if this really is the culprit, are there any "gotcha's" I need to be aware of? As an FYI, I replaced the hard drive on an old iBook @500MHz that I had a long time ago, and I replaced the drive on a 1.33GHz PowerBook about 6 years ago. You have to be careful, but using some of the info on web sites like iFixit it's not that hard. Time consuming, but not that hard. The Intel based MacBook's to me look like they're easier to service than either of those. I'm thinking about getting a unit off of eBay that matches mine but has something else wrong with it, like a busted display. I REFUSE to buy a new system. A guy at my office has a 2007 Mac Pro and he can't upgrade to Mountain Lion because his system is "obsoleted." That's ridiculous. If you pay nearly $7,500 for a system it shouldn't be trash just because Apple decides they don't have enough money (sorry for the soap box, but it's true, IMO!) Any input is appreciated.

  • Gaming blew a fuse and causes a funny smell: how do I overcome this?

    - by George Tomlinson
    I've been gaming for a while now. When playing certain games this PC goes into overdrive. The fan/fans start/s to sound like a jet engine it/they get/s so busy. Also I have smelt burning when this has happened. The fuse blew on the 4 socket adapter I was using recently. On the following thread someone said this could be due to the PSU not being strong enough to handle the load, in what it seems could be a related issue someone had, although the person who posted this question did say that blowing a fan on their PC stopped it crashing in that case: http://www.tomshardware.co.uk/answers/id-2047543/gtx-650-overheating-issue.html. This is exactly what they said: Your GPU isn't overheating. 70+ before it would shutdown and cause a restart. Make sure your PSU is strong enough to handle your new system at load and possibly run Memtest to check your RAM (although not BSOD'ing and just shutting down points to the PSU). This (the PSU part) makes more sense to me than it being to do with dust etc, since it seems a more plausible explanation of why the fuse blew. The PC has no problems except when playing certain games: i.e. TERA Rising and WoW with add-ons (I think WoW is ok as long as I don't have more than 1 add-on (Healers Have To Die)). I'm just wondering if anyone knows or can suggest what I might be able to do to be able to play these games without this problem occurring. The PC's spec is this: Display: NVIDIA GeForce GTX 650 8GB RAM (6 available) Processor: AMD FX (tm) - 8120 Eight-Core Processor - 3.1 GHz, 4 Cores, 8 Logical Processors I have read on another post that forcing vsync in the Nvidia Control Panel helped with what seems could be a similar problem, so I plan to see if that solves it, God permitting. EDIT: I tried the Vsync thing, and it seems the situation may have improved, although this may be due to something else: i.e. maybe the PC was working harder yesterday, due to just having downloaded a few things or lots of things running. I'm still noticing the funny smell when playing TERA. It's not so much burning: it's more like glue. The smell might have had a burning element to it in the past, but I think it's always had a glue element. EDIT 2: the PSU is an 'ATX Switching Power Supply', Model E-500ATX. Other info it gives on the PSU is 230V, Current 10A and Frequency 50-60Hz. It also has some other info which I can supply if necessary. Putting the PC plug in the wall socket instead of the power strip seems like it might have reduced the load on the PC quite a bit: I think it sounds less stressed. it has been off for a while whilst I took the side panel off though, so I'll wait to see what happens before getting too excited. EDIT 3: hmm. So here's the latest: just playing TERA. The fan's running quite fast again. Hard to tell whether switching to the wall socket has made a difference in terms of strain on the PC: I don't know if one would expect it to. Still seems like it might have helped though. Oh and there didn't seem to be much dust in the PC, although I didn't disconnect any components. I'm still getting the glue type smell. ASIDE: reminds me of someone on a PC near me at the library once who was actually sniffing glue right there in front of everyone while on the PC and he started talking about how he was sniffing glue. lol. That's no joke. EDIT 4: So the questions now are: Question 1: Is the smell something I should sort out? (If so, how might I do this?) Question 2: is it necessary to take any steps to prevent blowing another fuse (and if so which step/s?).
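
    As a rough sanity check on the PSU theory, using published TDP figures rather than measurements of this particular machine:

      CPU  (FX-8120, rated TDP)         ~125 W
      GPU  (GTX 650, rated TDP)          ~64 W
      Motherboard, RAM, drives, fans     ~50-100 W (rough allowance)
      -----------------------------------------------
      Estimated load while gaming        ~240-290 W

    A 500 W unit covers that on paper, but generic supplies often cannot sustain their label rating, so a weak or failing PSU remains consistent with the blown fuse and the smell; swapping in a known-good unit is a cheaper experiment than replacing anything else.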

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMware's Distributed Resource Scheduler addresses this as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

  • Windows 7 is stuck at "Starting Windows" when I attempt to boot computer

    - by Eli
    Basically, whenever I turn on my computer, it gets to the Starting Windows phase and just stays there. The startup animation still plays, yet it gets nowhere. I have tried booting into safe mode, however it gets stuck at loading CLASSPNP.SYS. It then freezes there and doesn't continue booting. I have tried booting into recovery mode from the hard drive, and it freezes after displaying the background image. I have tried booting from a recovery CD, which works, and I was able to use system restore. However, using system restore did not fix it, and it still is stuck at the Starting Windows screen. I have tried booting a Windows CD (Windows 8 Retail Installer) to see if I could upgrade it to fix this issue, however that froze at a blank screen after it got past the boot logo. I have tried changing around the BIOS settings (including resetting), to no avail. I have tried re-plugging the internal PSU cables (this is a custom-built desktop), yet this has changed nothing. I can boot into a loopback Ubuntu install on the same drive, which works fine, other than the fact that it has issues with some of the USB ports and the network card. This system has worked fine for the past few months, completely stable, and nothing in the configuration has changed before this error started happening. Startup Repair on the Windows recovery CD doesn't find any issues. Unplugging my secondary hard drive or swapping around memory doesn't change anything. The hard drive itself is fine, it hasn't shown any signs of failure and once again, boots my other OS fine. If anyone could help with this, that would be great. I can't seem to find any possible solution to this. If it makes any difference, my system specs are as follows: AMD FX-8320 Gigabyte GA-970A-D3 4GB of DDR3 Radeon HD 6870 550w PSU I'd like to not have to reinstall Windows, for I have more than a terabyte of data that I would have to back up if that becomes the only option. EDIT: I have since tried the following: Tried the solution involving restoring files from RegBackup, which changed nothing. Tried testing everything with Hiren's boot CD, everything comes back as fine. Tried disabling everything unnecessary in the BIOS and unplugging everything unneeded, it still hangs. Tried swapping out every possible combination of RAM, it still has the same result. The RAM is not at fault it seems Tried every GPU I own (which is many!) and it still hangs at the exact same place. Tried minimizing the power consumption as much as possible, even using an old PCI graphics card. It still hangs at the same place in the same way, signifying that it's not the PSU at fault. Tried resetting the BIOS again, still nothing. Tried every possible combination of BIOS options, even downclocking everything, it still hangs in the same spot. Tried upgrading the BIOS from version FB to FD, which changed nothing. Based on this, I would conclude the motherboard to be at fault. Are there any other possibilities? I don't want to spend $150 for a new motherboard. EDIT 2: This is what it gets stuck at when I try to boot into safe mode: Note the slight graphical corruption at the top of the screen. No matter how I set up the system, this seems to be there. In addition, either it has stopped booting into safe mode now, or it takes upwards of 2+ hours, and I haven't left it running for that long.

  • How to install Delorme StreetAtlas (any version) + GPS inside VirtualBox VM?

    - by hotei
    When I try to run the install program I get a popup message that says the installer program is not a valid executable. Background: I want a GPS with maps on my laptop running Ubuntu 10.4LTS. Unfortunately I can't find a decent native Linux GPS solution with 50 state US street level coverage. I have VirtualBox VMs available for WinXP and Win7 (among others). The VMs work fine with MicroSoft Streets and Trips (2010) and MapNGo 5 (a very! old Delorme product), but while both these products support GPS, they don't support the Earthmate LT-40 USB GPS I already have. I've got pretty much every Delorme Street Atlas they've released in the last decade and none of them will install in a VM. Any help would be much appreciated. Clarification: I've installed the Delorme products from these CDs before and the disks are fine - as long as installation is done on a "physical" machine. Added: I've tried install from an iso as well as the real CD. No difference in result (setup.exe is not a valid executable) The WinXP is SP-2 (held back on purpose at this point - I'll snapshot and fork a later SP to test). The Win2K is SP-6a. Win7(32) VM is whatever updates came out last week. The USB setup is working at least to the point where the GPS device is active in the device list (has an x in the box). At this point its not relevant because the program that needs to read it can't even be installed. Added 9-19: Added wine as harrymc suggested. Initial result was no change. Here's wines error message. The file '/media/Disk1/setup.exe' is not marked as executable. If this was downloaded or copied form an untrusted source, it may be dangerous to run. For more details, read about the executable bit. At first I thought the execute bit was the problem, but looking at several other windows CDs I see that the execute bit is not set on their exe files (which install to VM without error). Still it was worth a shot so I copied the StreetAtlas 9 DVD to my hard disk, changed the on-disk exe files to have the execute bit set and tried to install again. This time the install via wine got me through the installation process. When I start the program it bombs immediately, so we haven't made much real progress so far. I very much prefer the VM solution to wine, so I'm going back to that for now. To recap the VM situation, using an updated XP with SP3 and all recommended hotfixes: StreetAtlas 2009 USA fails with "not marked as executable". StreetAtlas 2007 USA fails with "not marked as executable". StreetAtlas 9 (copyright 2001) fails with "not marked as executable". SteeetAtlas (copyright 1991) fails with "not marked as executable" Delorme Topo 4 (copyright 2002) fails with "not marked as executable". Just about ready to give up. So I switched from XP VM to Win7 VM and tried StreetAtlas 2009 again. This time it installs. Earthmate USB GPS works. WTH? I feel like the monkey who just wrote a line of Shakespear. I'm smiling because it worked, but I have no clue why. I'm awarding the bounty to harrymc because wine did give some useful insight into the problem and a +1 to goyiux as thanks for helping.
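
    For the USB side specifically, a permanent filter can pin the Earthmate to the VM from the Ubuntu host; VBoxManage ships with VirtualBox, the VM name "Win7" is a placeholder, and the vendor/product IDs must be read from the list output rather than copied from here.

      sudo adduser "$USER" vboxusers      # USB passthrough on a Linux host needs membership in this group (log out/in afterwards)
      VBoxManage list usbhost             # note the GPS's VendorId and ProductId
      VBoxManage usbfilter add 0 --target "Win7" --name "Earthmate LT-40" --vendorid 1163 --productid 0100   # IDs are placeholders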

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our mySQL configuration for our large Magento website. The reason I believe that mySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted mySQL earlier after tweaking some settings, and so the results here may not be 100% accurate): >> MySQLTuner 1.3.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering [OK] Currently running supported MySQL version 5.5.37-35.0 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM [--] Data in MyISAM tables: 7G (Tables: 332) [--] Data in InnoDB tables: 213G (Tables: 8714) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [--] Data in MEMORY tables: 0B (Tables: 353) [!!] Total fragmented tables: 5492 -------- Security Recommendations ------------------------------------------- [!!] User '@host5.server1.autopartsnetwork.com' has no password set. [!!] User '@localhost' has no password set. [!!] User 'root@%' has no password set. -------- Performance Metrics ------------------------------------------------- [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B) [--] Reads / Writes: 95% / 5% [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads) [!!] Maximum possible memory usage: 220.0G (174% of installed RAM) [OK] Slow queries: 0% (6K/5M) [OK] Highest usage of available connections: 5% (61/1024) [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads) [OK] Query cache efficiency: 66.9% (3M cached / 5M selects) [!!] Query cache prunes per day: 3486361 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts) [!!] Joins performed without indexes: 1328 [OK] Temporary tables created on disk: 11% (126K on disk / 1M total) [OK] Thread cache hit rate: 99% (61 created / 42K connections) [!!] Table cache hit rate: 19% (9K open / 49K opened) [OK] Open file limit used: 2% (712/25K) [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks) [!!] InnoDB buffer pool / data size: 32.0G/213.4G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increasing the query_cache size over 128M may reduce performance Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 512M) [see warning above] join_buffer_size (> 128.0M, or always use indexes with joins) table_cache (> 12288) innodb_buffer_pool_size (>= 213G) My my.cnf configuration is as follows... 
[client] port = 3306 [mysqld_safe] nice = 0 [mysqld] tmpdir = /var/lib/mysql/tmp user = mysql port = 3306 skip-external-locking character-set-server = utf8 collation-server = utf8_general_ci event_scheduler = 0 key_buffer = 512M max_allowed_packet = 64M thread_stack = 512K thread_cache_size = 512 sort_buffer_size = 24M read_buffer_size = 8M read_rnd_buffer_size = 24M join_buffer_size = 128M # for some nightly processes client sessions set the join buffer to 8 GB auto-increment-increment = 1 auto-increment-offset = 1 myisam-recover = BACKUP max_connections = 1024 # max connect errors artificially high to support behaviors of NetScaler monitors max_connect_errors = 999999 concurrent_insert = 2 connect_timeout = 5 wait_timeout = 180 net_read_timeout = 120 net_write_timeout = 120 back_log = 128 # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384) table_open_cache = 12288 tmp_table_size = 512M max_heap_table_size = 512M bulk_insert_buffer_size = 512M open-files-limit = 8192 open-files = 1024 query_cache_type = 1 # large query limit supports SOAP and REST API integrations query_cache_limit = 4M # larger than 512 MB query cache size is problematic; this is typically ~60% full query_cache_size = 512M # set to true on read slaves read_only = false slow_query_log_file = /var/log/mysql/slow.log slow_query_log = 0 long_query_time = 0.2 expire_logs_days = 10 max_binlog_size = 1024M binlog_cache_size = 32K sync_binlog = 0 # SSD RAID10 technically has a write capacity of 10000 IOPS innodb_io_capacity = 400 innodb_file_per_table innodb_table_locks = true innodb_lock_wait_timeout = 30 # These servers have 80 CPU threads; match 1:1 innodb_thread_concurrency = 48 innodb_commit_concurrency = 2 innodb_support_xa = true innodb_buffer_pool_size = 32G innodb_file_per_table innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 2G skip-federated [mysqldump] quick quote-names single-transaction max_allowed_packet = 64M I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
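
    Since New Relic is already pointing at slow SELECTs, a low-risk next step is to let MySQL capture them through one busy period and then summarize the log; the paths match the my.cnf above, and pt-query-digest is Percona Toolkit, assumed to be installed separately.

      mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0.2;"
      # ...wait out a slow period on the site, then:
      pt-query-digest /var/log/mysql/slow.log > /tmp/slow-report.txt
      head -n 60 /tmp/slow-report.txt     # worst queries ranked by total response time

    Beyond that, the tuner output itself flags the structural issues: a theoretical 220 GB memory ceiling (1024 connections times large per-thread buffers) on a box with less RAM than that, and a 32 GB buffer pool against 213 GB of InnoDB data, so trimming max_connections and join_buffer_size and growing innodb_buffer_pool_size as far as RAM allows are the obvious levers.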

  • How to save an NTFS partition which suddenly became empty

    - by SteveO
    One NTFS partition of my laptop was suddenly wiped out, without any notice to me, when I rebooted from Windows 7 to Ubuntu 12.04 today. I am in need of help to save my files on that partition, which are important and unfortunately haven't been backed up yet. My laptop has two operating systems, Windows 7 and Ubuntu 12.04, with an NTFS partition shared between the two operating systems for storing some data files (109GB, about 97% of which has been used). I have almost always been using Ubuntu, but today I happened to have to work under Windows. Following is a record of what happened, in time order, numbered according to which operating system I was in at each stage. When I started into Windows 7, right before being able to log in, it took a while and two reboots to configure Windows. I thought it was normal, since the last time I used Windows, two weeks ago, it took very long and several reboots to update Windows, since the last time I had used Windows before then was in November last year. Then after finally being able to log in to Windows 7, I installed Libre Office, MathType (I got it from http://dl.portablesoft.org/down/?id=2515, which I originally thought was a trial version, but later I learned was a cracked version and felt wrong. I made a copy of it at dropbox http://dl.dropbox.com/u/13029929/MathType_6.8_PortableSoft.rar, not for distributing it but to list it there just in case it will help to identify the problem), and MikTex. I then edited some .doc files in the NTFS partition under both Microsoft Office with MathType, and Libre Office. When I finished working under Windows and rebooted into Ubuntu, Ubuntu did some filesystem checking and reported that the NTFS partition was not able to be mounted. Then I rebooted again into Windows, and found that the NTFS partition had been emptied, i.e. all the data files were gone, and only one system file bootsqm.dat and one system directory System Volume Information were there, with their last updated time being the time when I first rebooted from Windows to Ubuntu (in fact, it is 4 hours ahead of the actual time of that reboot; see immediately below). Also, I noticed that the time shown by Windows is not correct for my time zone, (UTC-05:00) Eastern Time (US & Canada): it is 4 hours ahead of the correct time (my current time is 3am, but the computer shows 7am). The same things happened when I rebooted into Ubuntu again: the NTFS partition had been emptied and was left with only one Windows system file bootsqm.dat and one Windows system directory System Volume Information, and the time shown by Ubuntu is 4 hours ahead of the correct time. I wonder what I can do to retrieve my data files from the NTFS partition? If I am not able to do it myself, will some professionals be able to help me out? Thanks a lot! PS: I didn't think I did anything that required emptying that partition. But there was quite a bit of work done during that stage right before the reboot from Windows to Ubuntu when the problem occurred. Did I make some mistake in my operations?
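
    Assuming the partition itself is still listed in the partition table and only its contents vanished, the usual first moves from the Ubuntu side are to image it and then let recovery tools work on the copy; device names and paths below are placeholders, and ddrescue and testdisk come from the gddrescue and testdisk packages.

      sudo ddrescue /dev/sda5 /media/usbdisk/ntfs-part.img /media/usbdisk/ntfs-part.log   # image first, recover later
      sudo ntfsfix --no-action /dev/sda5      # report-only consistency check, writes nothing
      testdisk /media/usbdisk/ntfs-part.img   # interactive: try Advanced > Undelete on the NTFS partition

    Until the files are back, it is safest not to boot Windows again or write anything to that partition.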

  • Exchange Server 2007 Send and Receive Connectors

    - by Mistiry
    I have gotten awesome advice from users on here for getting Exchange on Windows SBS 2008 set up. I think this is the final piece and I'm ready for roll-out! I need to set up Exchange so that it RECEIVES mail from our existing mail server as a Forward [aliases on the existing mail server to forward mail from [email protected] to [email protected]] (not using the POP3 Connector), and SENDS mail through that server as well (sends from [email protected] to [email protected] and then out to the world, showing in the headers as from [email protected] or at absolute least have the reply-to set as this). Alternatively, as long as the .net email address doesn't show in the From and replies are directed to the .com account, email can go from Exchange to the outside world without directing through the existing mail server. External Domain: domain.com Internal Domain: domain.local Internet Domain Name Set in SBS Console: domain.net When I go to http://remote.domain.net I get the Remote Web Workspace, and can login to both Sharepoint and OWA. I can send an email from OWA to a GMail account. I receive it from [email protected], which is an alias of [email protected]. I cannot, however, send an email from OWA to ANY domain.com email addresses. I am also not receiving any email to this Exchange account (except for NDRs). When I try sending an email to a domain.com account, here is the error (I had to replace all < and with { and }): Delivery has failed to these recipients or distribution lists: [email protected] The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange will not try to redeliver this message for you. Please check the e-mail address and try resending this message, or provide the following diagnostic text to your system administrator. Generating server: IFEXCHANGE.domain.local [email protected] #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ## Original message headers: Received: from IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a]) by IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a%10]) with mapi; Tue, 17 Aug 2010 14:14:14 -0400 Content-Type: application/ms-tnef; name="winmail.dat" Content-Transfer-Encoding: binary From: John Doe {[email protected]} To: "[email protected]" {[email protected]} Date: Tue, 17 Aug 2010 14:14:12 -0400 Subject: asdf Thread-Topic: asdf Thread-Index: AQHLPjf+h6hA5MJ1JUu1WS4I4CiWeA== Message-ID: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local} Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local} MIME-Version: 1.0 I hope I explained the situation well enough for someone to be able to explain to me what I'm missing. If I could, I'd be putting a 10K bounty, but unfortunately I've got only 74 reputation (hey, I'm a newbie here!). I'm pretty sure the obvious "RecipNotFound" error is why its not working, my question is how to resolve this. The email account exists, it receives mail just fine, yet when I send it from the Exchange server it fails. EDIT In OC-Hub Transport, the Email Address Policies has 2 entries. "Windows SBS Email Address Policy" is set up to: Include All Recipient Types, no conditions, and SMTP %[email protected]. "Default Policy" set to: Include All Recipient Types, no conditions, and SMTP @domain.net. Three Authoritative Accepted domains domain.com domain.local (Default) domain.net Remote Domains tab has two entries. 
Default with domain * Windows SBS Company Web Domain with domain companyweb.

  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (that requires PHP - it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right - I seem to be getting a 400 Bad Request error whenever I try to browse to my website. I've been mostly following one guide, and I've done more or less everything it recommends, except for not setting up PHP-FPM to use a Unix Socket and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP, and MySQL. Anyways, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name - which contains my real name): /etc/nginx/nginx.conf user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/sites-enabled/subdomain.mydomain.net server { listen 80; # listen for IPv4 listen [::]:80; # listen for IPv6 server_name www.subdomain.mydomain.net subdomain.mydomain.net; access_log /srv/www/subdomain.mydomain.net/logs/access.log; error_log /srv/www/subdomain.mydomain.net/logs/error.log; location / { root /srv/www/subdomain.mydomain.net/public; index index.php; } location ~ \.php$ { try_files $uri =400; include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name; } } All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. Although, the only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users. What can I do to resolve my issue? Please let me know if I need to supply any additional details.
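
    Before changing the config further, it can help to watch exactly what nginx objects to while reproducing the error; the hostnames and paths below are the ones from the post.

      sudo nginx -t                                         # confirm the config parses and which files are loaded
      curl -v http://subdomain.mydomain.net/ -o /dev/null   # reproduce the 400 and inspect the request/response
      sudo tail -f /var/log/nginx/error.log /srv/www/subdomain.mydomain.net/logs/error.log

    If the error log stays silent, temporarily raising error_log to the info level usually records the specific reason nginx rejected the request.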

    Read the article

  • Connecting PC to TV via HDMI/DVI: Windows XP doesn't allow the appropriate screen resolution

    - by Jørgen
    I have a computer that is connected to the living room TV (a Panasonic) via HDMI. There is no other monitor connected. My problem is that the computer, which is running Windows XP, does not allow me to set the proper resolution for the TV. Both the graphics adapter and the TV should support the 1280x720 resolution, but it cannot be selected - the only available options are 1280x600 and 800x600, both in the "native" Windows dialog box and the custom Intel graphics options dialog box. Do anyone have a suggestion for a solution for this? Things I've thought of: Setting the resolution directly in the registry (where?) Installing some "custom" monitor driver (the TV manufacturer does not appear to provide any, currently the "generic" one is used) Details on the setup: Connection: DVI output on the computer via a passive DVI-HDMI adapter to the HDMI input on the TV, audio is run on a separate link, the TV is able to combine video and audio without any problem, the problem is there regardless of whether or not the audio is connected. The connection is several meters long through some walls, for this reason using a VGA cable instead is not an option. Note that the report explicitly says that the TV supports 1280x720. Still, I am not allowed to select it in Graphics Options, only 1280x600 and 800x600 is available. For 800x600, there's a lot of black around the edges; for 1280x600, the screen is "zoomed" so the edges of the monitor image (like the taskbar) is not visible. Other: The computer is running Windows XP. More recent versions of Windows are not an option (I have no licence). Linux is probably not an option (some of the video streaming sites I plan to use do not support it, I think) I wrote the rest of the details below. Thanks for any help!! TV: Panasonic TX-L32X10Y, European version; a 720p 32" quite "regular" LCD TV. Allowed resolutions according to manual: Signal name: 640x480 @60HZ Horizontal frequency: 31.47 kHz Vertical frequency: 60Hz Signal name: 750/720) /60p Horizontal frequency: 45.00 kHz Vertical frequency: 60Hz Signal name: 1,125 (1,080) / 60p Horizontal frequency: 67.50 kHz Vertical frequency: 60Hz (this is exactly how the manual presents it. PC via D-SUB (VGA cable) and "regular" HDMI have more alternatives.) Messing with the "zoom" settings on the TV does not affect the available resolution options on the computer. Computer: The following is a printout from one of the graphics adapter option pages. I think it covers most of it. The computer is a Dell. 
INTEL(R) EXTREME GRAPHICS 2 REPORT
Report Date: 04/17/2011
Report Time[hr:mm:ss]: 20:18:02
Driver Version: 6.14.10.4396
Operating System: Windows XP* Professional, Service Pack 3 (5.1.2600)
Default Language: English
DirectX* Version: 9.0
Physical Memory: 1021 MB
Minimum Graphics Memory: 1 MB
Maximum Graphics Memory: 96 MB
Graphics Memory in Use: 6 MB
Processor: x86
Processor Speed: 2593 MHZ
Vendor ID: 8086
Device ID: 2572
Device Revision: 02
* Accelerator Information *
Accelerator in Use: Intel(R) 82865G Graphics Controller
Video BIOS: 2972
Current Graphics Mode: 1280 by 600 True Color (60 Hz)
* Devices Connected to the Graphics Accelerator *
Active Digital Displays: 1
* Digital Display *
Monitor Name: Plug and Play Monitor
Display Type: Digital
Gamma Value: 2.20
DDC2 Protocol: Supported
Maximum Image Size: Horizontal: Not Available, Vertical: Not Available
Monitor Supported Modes: 1280 by 720 (50 Hz), 1280 by 720 (60 Hz)
Display Power Management Support: Standby Mode: Not Supported, Suspend Mode: Not Supported, Active Off Mode: Not Supported
(disclaimer: this question was also asked at the Wikipedia Reference Desk some time ago and might show up in a Google search. I got no useful answers there.)

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inherirante), I have seen upwards to 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled queries will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = from dog in YourContext.DogSet where dog.ID == id select dog; } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lamba expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which one you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == true && dog.Age == myParams.age) && (myParams.checkName == true && dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benifits of a pre-compiled quert. 
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dod> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name == @Name "; if( age > 0 ) query = query + " and dog.Age == @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generate a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal world search. - Noone likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query and then refine the results protected List<Dod> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ); return dogs; } It is particularly useful when you start displaying all the data then allow for filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on the Dog.Owner.Name So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq query when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries are small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data on the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db). 
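If the EntitySQL concatenation route is taken anyway, one way to keep the where clause from turning into spaghetti (a sketch reusing the post's Entities.DogSet naming, not code from the original): collect the predicates and parameters in lists and join them once, so each criterion is declared in a single place.

    // Sketch only: builds the same kind of EntitySQL query as above, but keeps each
    // criterion in one place. ObjectQuery/ObjectParameter live in System.Data.Objects.
    protected List<Dog> GetSomeDogs(string name, int age)
    {
        List<string> predicates = new List<string>();
        List<ObjectParameter> parameters = new List<ObjectParameter>();

        if (!String.IsNullOrEmpty(name))
        {
            predicates.Add("dog.Name == @Name");
            parameters.Add(new ObjectParameter("Name", name));
        }
        if (age > 0)
        {
            predicates.Add("dog.Age == @Age");
            parameters.Add(new ObjectParameter("Age", age));
        }

        string query = "select value dog from Entities.DogSet as dog";
        if (predicates.Count > 0)
            query += " where " + String.Join(" and ", predicates.ToArray());

        // YourContext.Instance: the singleton context described in the Singleton access section below
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext.Instance);
        foreach (ObjectParameter parameter in parameters)
            oQuery.Parameters.Add(parameter);

        return oQuery.ToList();
    }

Each distinct combination of criteria still pays its own first-run parsing cost, but at least the query text and its parameters cannot drift apart.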
Singleton access The best way to deal with your context and entities accross all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { On3GoEntities entity = new On3GoEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together, when further queries are made from the database, which is what is of interest here. For example, lets assume that Dog with ID == 2 has an owner which ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Owner owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issue, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data your query with NoTracking can be used to make update/insert/delete in the database? If so, don't use NoTracking because associations are not tracked and will causes exceptions to be thrown. In a page where there are absolutly no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix then you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exist with tracking on. Basicly, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail". 
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking It depends on the queries. Some are much more succeptible to tracking than other. I don't have a fast an easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking object. The first one is that the object is cached, so subsequent call for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntity object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanism could extend that). What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happens. Also remember the piece above about having the associations connected automatically for your? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link to between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Lets say you have you Dog object public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call that very often. Second, each different pages will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first request for the Dog object: static public Dog Get(int id) { return GetDog(entity,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +

    Read the article

  • Ajax Control Toolkit December 2013 Release

    - by Stephen.Walther
    Today, we released a new version of the Ajax Control Toolkit that contains several important bug fixes and new features. The new release contains a new Tabs control that has been entirely rewritten in jQuery. You can download the December 2013 release of the Ajax Control Toolkit at http://Ajax.CodePlex.com. Alternatively, you can install the latest version directly from NuGet: The Ajax Control Toolkit and jQuery The Ajax Control Toolkit now contains two controls written with jQuery: the ToggleButton control and the Tabs control.  The goal is to rewrite the Ajax Control Toolkit to use jQuery instead of the Microsoft Ajax Library gradually over time. The motivation for rewriting the controls in the Ajax Control Toolkit to use jQuery is to modernize the toolkit. We want to continue to accept new controls written for the Ajax Control Toolkit contributed by the community. The community wants to use jQuery. We want to make it easy for the community to submit bug fixes. The community understands jQuery. Using the Ajax Control Toolkit with a Website that Already uses jQuery But what if you are already using jQuery in your website?  Will adding the Ajax Control Toolkit to your website break your existing website?  No, and here is why. The Ajax Control Toolkit uses jQuery.noConflict() to avoid conflicting with an existing version of jQuery in a page.  The version of jQuery that the Ajax Control Toolkit uses is represented by a variable named actJQuery.  You can use actJQuery side-by-side with an existing version of jQuery in a page without conflict.Imagine, for example, that you add jQuery to an ASP.NET page using a <script> tag like this: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="TestACTDec2013.WebForm1" %> <!DOCTYPE html> <html > <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <script src="Scripts/jquery-2.0.3.min.js"></script> <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:TabContainer runat="server"> <ajaxToolkit:TabPanel runat="server"> <HeaderTemplate> Tab 1 </HeaderTemplate> <ContentTemplate> <h1>First Tab</h1> </ContentTemplate> </ajaxToolkit:TabPanel> <ajaxToolkit:TabPanel runat="server"> <HeaderTemplate> Tab 2 </HeaderTemplate> <ContentTemplate> <h1>Second Tab</h1> </ContentTemplate> </ajaxToolkit:TabPanel> </ajaxToolkit:TabContainer> </div> </form> </body> </html> The page above uses the Ajax Control Toolkit Tabs control (TabContainer and TabPanel controls).  The Tabs control uses the version of jQuery that is currently bundled with the Ajax Control Toolkit (jQuery version 1.9.1). The page above also includes a <script> tag that references jQuery version 2.0.3.  You might need that particular version of jQuery, for example, to use a particular jQuery plugin. The two versions of jQuery in the page do not create a conflict. This fact can be demonstrated by entering the following two commands in the JavaScript console window: actJQuery.fn.jquery $.fn.jquery Typing actJQuery.fn.jquery will display the version of jQuery used by the Ajax Control Toolkit and typing $.fn.jquery (or jQuery.fn.jquery) will show the version of jQuery used by other jQuery plugins in the page.      Preventing jQuery from Loading Twice So by default, the Ajax Control Toolkit will not conflict with any existing version of jQuery used in your application. However, this does mean that if you are already using jQuery in your application then jQuery will be loaded twice. 
For performance reasons, you might want to avoid loading the jQuery library twice. By taking advantage of the <remove> element in the AjaxControlToolkit.config file, you can prevent the Ajax Control Toolkit from loading its version of jQuery. <ajaxControlToolkit> <scripts> <remove name="jQuery.jQuery.js" /> </scripts> <controlBundles> <controlBundle> <control name="TabContainer" /> <control name="TabPanel" /> </controlBundle> </controlBundles> </ajaxControlToolkit> Be careful here:  the name of the script being removed – jQuery.jQuery.js – is case-sensitive. If you remove jQuery then it is your responsibility to add the exact same version of jQuery back into your application.  You can add jQuery back using a <script> tag like this: <script src="Scripts/jquery-1.9.1.min.js"></script>     Make sure that you add the <script> tag before the server-side <form> tag or the Ajax Control Toolkit won’t detect the presence of jQuery. Alternatively, you can use the ToolkitScriptManager like this: <ajaxToolkit:ToolkitScriptManager runat="server"> <Scripts> <asp:ScriptReference Name="jQuery.jQuery.js" /> </Scripts> </ajaxToolkit:ToolkitScriptManager> The Ajax Control Toolkit is tested against the particular version of jQuery that is bundled with the Ajax Control Toolkit. Currently, the Ajax Control Toolkit uses jQuery version 1.9.1. If you attempt to use a different version of jQuery with the Ajax Control Toolkit then you will get the exception jQuery 1.9.1 is required in your JavaScript console window: If you need to use a different version of jQuery in the same page as the Ajax Control Toolkit then you should not use the <remove> element. Instead, allow the Ajax Control Toolkit to load its version of jQuery side-by-side with the other version of jQuery. Lots of Bug Fixes As usual, we implemented several important bug fixes with this release. The bug fixes concerned the following three controls: Tabs control – In the course of rewriting the Tabs control to use jQuery, we fixed several bugs related to the Tabs control. AjaxFileUpload control – We resolved an issue concerning the AjaxFileUpload and the TMP directory. HTMLEditor control – We updated the HTMLEditor control to use the new Ajax Control Toolkit bundling and minification framework. Summary I would like to thank the Superexpert team for their hard work on this release. Many long hours of coding and testing went into making this release possible.

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and now moving into the lime-light with the pending release of ASP.NET Web API which is bringing a ton of attention to server side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes I know C# is strongly typed, why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response and rather than having to map the entire type structure of what that service returns, JsonValue actually allows me to cherry pick and only work with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code. Creating JSON on the fly with JsonValue Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:[TestMethod] public void JsonValueOutputTest() { // strong type instance var jsonObject = new JsonObject(); // dynamic expando instance you can add properties to dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1977; album.Songs = new JsonArray() as dynamic; dynamic song = new JsonObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JsonObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces proper JSON just as you would expect: {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]} The important thing about this code is that there's no explicitly type that is used for holding the values to serialize to JSON. I am essentially creating this value structure on the fly by adding properties and then serialize it to JSON. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject these values are stored in pseudo collections of key value pairs that are exposed as properties through the DynamicObject functionality in .NET. 
The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that the syntax looks like normal object access sytnax. Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JsonObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JsonValue internally stores properties keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before JsonObject/Value are very similar in the behavior of the structure. Reading JSON strings into JsonValue The JsonValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JsonValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can be then referenced using familiar dynamic object syntax. Here's a simple example:[TestMethod] public void JsonValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JsonValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the right type expected as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method oesn't as it expects two object instances and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String) which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example. 
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1977, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""67280fb8"", ""AlbumName"": ""Echoes, Silence, Patience & Grace"", ""Artist"": ""Foo Fighters"", ""YearReleased"": 2007, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"", ""Songs"": [ { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; dynamic albums = JsonValue.Parse(jsonString); foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName ); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName);}   It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like if you were using JavaScript to parse through this data, doesn't it? And that's the point…© Rick Strahl, West Wind Technologies, 2005-2012Posted in .NET  Web Api  JSON   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the “Extensions for SharePoint Products and Technologies”. We want out upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm. There a number of goodies above and beyond a solution file that requires the install, with the main one being the TFS2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint a new feature with TFS 2010. This works in both SharePoint 2007 and SharePoint 2010 with the level of integration dependant on the version of SharePoint that you are running. There are three levels of integration with “SharePoint Services 3.0” or “SharePoint Foundation 2010” being the lowest. This level only offers reporting services framed integration for reporting along with Work Item Integration and document management. The highest is Microsoft Office SharePoint Services (MOSS) Enterprise with Excel Services integration providing some lovely dashboards. Figure: Dashboards take the guessing out of Project Planning and estimation. Plus writing these reports would be boring!   The Extensions that you need are on the same installation media as the main TFS install and the only difference is the options you pick during the install. Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010   Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smother than on our internal server. I think this was mostly to do with this being a clean install. Once it is installed you need to run the configuration. This will add all of the Solution and Templates that are needed for SharePoint to work properly with TFS. Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed.   Figure: All done, you have everything installed, but you still need to configure it Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server we need to configure them both so that they will talk happily to each other. Configuring the SharePoint 2010 Managed path for Team Foundation Server 2010 In order for TFS to automatically create your project portals you need a wildcard managed path setup. This is where TFS will create the portal during the creation of a new Team project. To find the managed paths page for any application you need to first select the “Managed web applications”  link from the SharePoint 2010 Central Administration screen. Figure: Find the “Manage web applications” link under the “Application Management” section. On you are there you will see that the “Managed Paths” are there, they are just greyed out and selecting one of the applications will enable it to be clicked. Figure: You need to select an application for the SharePoint 2010 ribbon to activate.   Figure: You need to select an application before you can get to the Managed Paths for that application. Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path “TFS02” as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first. 
This links the location to the server name, and as you can’t have two projects of the same name in two separate project collections there is unlikely to be any conflicts. Figure: Add a “tfs02” wildcard inclusion path to your SharePoint site. Configure the Team Foundation Server 2010 connection to SharePoint 2010 In order to have you new TFS 2010 Server talk to and create sites in SharePoint 2010 you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode it has a SharePoint Services 3.0 (the free one) server running on the same box. But we want to change that so we can use the external SharePoint 2010 instance. Just open the “Team Foundation Server Administration Console” and navigate to the “SharePoint Web Applications” section. Here you click “Add” and enter the details for the Managed path we just created. Figure: If you have special permissions on your SharePoint you may need to add accounts to the “Service Accounts” section.    Before we can se this new SharePoint 2010 instance to be the default for our upgraded Team Project Collection we need to configure SharePoint to take instructions from our TFS server. Configure SharePoint 2010 to connect to Team Foundation Server 2010 On your SharePoint 2010 server open the Team Foundation Server Administration Console and select the “Extensions for SharePoint Products and Technologies” node. Here we need to “grant access” for our TFS 2010 server to create sites. Click the “Grant access” link and  fill out the full URL to the  TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when the users create a new team project they can change the default and point it anywhere they like as long as it is an authorised SharePoint location. Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010 Now that we have an authorised location for our team project portals to be created we need to tell our Team Project Collection that this is where it should stick sites by default for any new Team Projects created. Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010 Back on out TFS 2010 server we need to setup the defaults for our upgraded Team Project Collection to the new SharePoint 2010 integration we have just set up. On the TFS 2010 server open up the “Team Foundation Server Administration Console” again and navigate to the “Team Project Collections” node. Once you are there you will see a list of all of your TPC’s and in our case we have a DefaultCollection as well as out named and Upgraded collection for TFS 2008. If you select the “SharePoint Site” tab we can see that it is not currently configured. Figure: Our new Upgrade TFS2008 Team Project Collection does not have SharePoint configured Select to “Edit Default Site Location” and select the new integration point that we just set up for SharePoint 2010. Once you have selected the “SharePoint Web Application” (the thing we just configured) then it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring. Figure: Set the default location for new Team Project Portals to be created for this Team Project Collection This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent. 
TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first. Figure: If there is no Team Project Collection site at this location the TFS 2010 server is going to create one This will create a nice Team Project Collection parent site to contain the Portals for any new Team Projects that are created. It is with noting that it will not create portals for existing Team Projects as this process is run during the Team Project Creation wizard. Figure: Just a basic parent site to host all of your new Team Project Portals as sub sites   You will need to add all of the users that will be creating Team Projects to be Administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to be having lots of them, but it is really just a default placeholder so you have a top level site that you can backup and point at. You have now integrated SharePoint 2010 and team Foundation Server 2010! You can now go forth and multiple your Team Projects for this Team Project Collection or you can continue to add portals to your other Collections.   Technorati Tags: TFS 2010,Sharepoint 2010,VS ALM
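
As a side note, the wildcard managed path step above can also be scripted instead of clicked through Central Administration; a sketch from the SharePoint 2010 Management Shell, with the web application URL as a placeholder:

    # Creates a wildcard inclusion managed path named "tfs02" on the chosen web application.
    # New-SPManagedPath creates wildcard paths by default; add -Explicit for an explicit one.
    New-SPManagedPath "tfs02" -WebApplication http://sharepointserver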

    Read the article

  • the size of apt-get update lists is too big

    - by dumb906
    I ran a clean install to Ubuntu 12.04 and so far everything has been working well. I especially commend the Ubuntu team for this release. I only noticed that the size of repository update is now about ~13MB. Normally, it is about this size for the first time you run apt-get update after a clean install and then ~ 23kb - 1300kb for subsequent updates. The output from apt-get update is the same I get for previous versions of Ubuntu (its pretty normal). Its a bit too long but look at an example output I got from running apt-get update. Ign http://archive.canonical.com precise InRelease Ign http://dl.google.com stable InRelease Ign http://dl.google.com stable InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Hit http://download.virtualbox.org precise InRelease Ign http://security.ubuntu.com precise-security InRelease Ign http://linux.dropbox.com precise InRelease Ign http://extras.ubuntu.com precise InRelease Ign http://download.skype.com stable InRelease Hit http://archive.canonical.com precise Release.gpg Get:1 http://dl.google.com stable Release.gpg [198 B] Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net precise InRelease Get:2 http://security.ubuntu.com precise-security Release.gpg [198 B] Get:3 http://extras.ubuntu.com precise Release.gpg [72 B] Hit http://download.virtualbox.org precise/contrib i386 Packages Ign http://download.skype.com stable Release.gpg Hit http://linux.dropbox.com precise Release.gpg Ign http://us.archive.ubuntu.com precise InRelease Ign http://us.archive.ubuntu.com precise-updates InRelease Ign http://us.archive.ubuntu.com precise-backports InRelease Hit http://archive.canonical.com precise Release Get:4 http://dl.google.com stable Release.gpg [198 B] Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Get:5 http://security.ubuntu.com precise-security Release [49.6 kB] Hit http://extras.ubuntu.com precise Release Ign http://download.skype.com stable Release Ign http://download.virtualbox.org precise/contrib TranslationIndex Get:6 http://us.archive.ubuntu.com precise Release.gpg [198 B] Hit http://archive.canonical.com precise/partner i386 Packages Hit http://linux.dropbox.com precise Release Get:7 http://ppa.launchpad.net precise Release.gpg [316 B] Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://extras.ubuntu.com precise/main Sources Get:8 http://ppa.launchpad.net precise Release.gpg [316 B] Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Get:9 http://us.archive.ubuntu.com precise-updates Release.gpg [198 B] Ign http://archive.canonical.com precise/partner TranslationIndex Ign 
http://download.skype.com stable/non-free i386 Packages/DiffIndex Get:10 http://dl.google.com stable Release [1,347 B] Hit http://linux.dropbox.com precise/main i386 Packages Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://extras.ubuntu.com precise/main i386 Packages Ign http://extras.ubuntu.com precise/main TranslationIndex Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Get:11 http://us.archive.ubuntu.com precise-backports Release.gpg [198 B] Ign http://download.skype.com stable/non-free TranslationIndex Get:12 http://dl.google.com stable Release [1,347 B] Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Ign http://linux.dropbox.com precise/main TranslationIndex Hit http://ppa.launchpad.net precise Release Ign http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Get:13 http://ppa.launchpad.net precise Release [11.9 kB] Get:14 http://us.archive.ubuntu.com precise Release [49.6 kB] Hit http://download.skype.com stable/non-free i386 Packages Get:15 http://dl.google.com stable/main i386 Packages [1,268 B] Ign http://dl.google.com stable/main TranslationIndex Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net oneiric Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net oneiric Release Get:16 http://security.ubuntu.com precise-security/main Sources [7,089 B] Hit http://ppa.launchpad.net oneiric Release Get:17 http://dl.google.com stable/main i386 Packages [769 B] Ign http://dl.google.com stable/main TranslationIndex Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Get:18 http://security.ubuntu.com precise-security/restricted Sources [14 B] Get:19 http://security.ubuntu.com precise-security/universe Sources [3,653 B] Get:20 http://security.ubuntu.com precise-security/multiverse Sources [696 B] Get:21 http://security.ubuntu.com precise-security/main i386 Packages [32.9 kB] Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Get:22 http://us.archive.ubuntu.com precise-updates Release [49.6 kB] Ign http://ppa.launchpad.net precise/main Sources/DiffIndex Ign http://ppa.launchpad.net precise/main i386 Packages/DiffIndex Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Get:23 http://security.ubuntu.com precise-security/restricted i386 Packages [14 B] Get:24 http://security.ubuntu.com precise-security/universe i386 Packages [8,594 B] Get:25 http://security.ubuntu.com precise-security/multiverse i386 Packages [1,393 B] Hit http://security.ubuntu.com precise-security/main 
TranslationIndex Hit http://security.ubuntu.com precise-security/multiverse TranslationIndex Hit http://security.ubuntu.com precise-security/restricted TranslationIndex Hit http://security.ubuntu.com precise-security/universe TranslationIndex Ign http://ppa.launchpad.net precise/main TranslationIndex Get:26 http://us.archive.ubuntu.com precise-backports Release [49.6 kB] Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Get:27 http://ppa.launchpad.net precise/main i386 Packages [1,276 B] Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Get:28 http://us.archive.ubuntu.com precise/main Sources [934 kB] Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main i386 Packages Hit http://security.ubuntu.com precise-security/main Translation-en Hit http://security.ubuntu.com precise-security/multiverse Translation-en Hit http://security.ubuntu.com precise-security/restricted Translation-en Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://security.ubuntu.com precise-security/universe Translation-en Ign http://archive.canonical.com precise/partner Translation-en_US Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Ign http://extras.ubuntu.com precise/main Translation-en_US Ign http://download.virtualbox.org precise/contrib Translation-en_US Ign http://archive.canonical.com precise/partner Translation-en Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Ign http://extras.ubuntu.com precise/main Translation-en Ign http://download.virtualbox.org precise/contrib Translation-en Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Ign http://linux.dropbox.com precise/main Translation-en_US Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://download.skype.com stable/non-free Translation-en_US Ign http://linux.dropbox.com precise/main Translation-en Ign http://download.skype.com 
stable/non-free Translation-en Ign http://dl.google.com stable/main Translation-en_US Ign http://dl.google.com stable/main Translation-en Ign http://dl.google.com stable/main Translation-en_US Get:29 http://us.archive.ubuntu.com precise/restricted Sources [5,470 B] Get:30 http://us.archive.ubuntu.com precise/universe Sources [5,019 kB] Ign http://dl.google.com stable/main Translation-en Get:31 http://us.archive.ubuntu.com precise/multiverse Sources [155 kB] Get:32 http://us.archive.ubuntu.com precise/main i386 Packages [1,274 kB] Get:33 http://us.archive.ubuntu.com precise/restricted i386 Packages [8,431 B] Get:34 http://us.archive.ubuntu.com precise/universe i386 Packages [4,796 kB] Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Get:35 http://us.archive.ubuntu.com precise/multiverse i386 Packages [121 kB] Hit http://us.archive.ubuntu.com precise/main TranslationIndex Hit http://us.archive.ubuntu.com precise/multiverse TranslationIndex Hit http://us.archive.ubuntu.com precise/restricted TranslationIndex Hit http://us.archive.ubuntu.com precise/universe TranslationIndex Get:36 http://us.archive.ubuntu.com precise-updates/main Sources [31.2 kB] Get:37 http://us.archive.ubuntu.com precise-updates/restricted Sources [765 B] Get:38 http://us.archive.ubuntu.com precise-updates/universe Sources [10.1 kB] Get:39 http://us.archive.ubuntu.com precise-updates/multiverse Sources [696 B] Get:40 http://us.archive.ubuntu.com precise-updates/main i386 Packages [96.5 kB] Get:41 
http://us.archive.ubuntu.com precise-updates/restricted i386 Packages [770 B] Get:42 http://us.archive.ubuntu.com precise-updates/universe i386 Packages [27.7 kB] Get:43 http://us.archive.ubuntu.com precise-updates/multiverse i386 Packages [1,393 B] Hit http://us.archive.ubuntu.com precise-updates/main TranslationIndex Hit http://us.archive.ubuntu.com precise-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com precise-updates/restricted TranslationIndex Hit http://us.archive.ubuntu.com precise-updates/universe TranslationIndex Get:44 http://us.archive.ubuntu.com precise-backports/main Sources [700 B] Get:45 http://us.archive.ubuntu.com precise-backports/restricted Sources [14 B] Get:46 http://us.archive.ubuntu.com precise-backports/universe Sources [1,680 B] Get:47 http://us.archive.ubuntu.com precise-backports/multiverse Sources [14 B] Get:48 http://us.archive.ubuntu.com precise-backports/main i386 Packages [559 B] Get:49 http://us.archive.ubuntu.com precise-backports/restricted i386 Packages [14 B] Get:50 http://us.archive.ubuntu.com precise-backports/universe i386 Packages [1,391 B] Get:51 http://us.archive.ubuntu.com precise-backports/multiverse i386 Packages [14 B] Hit http://us.archive.ubuntu.com precise-backports/main TranslationIndex Hit http://us.archive.ubuntu.com precise-backports/multiverse TranslationIndex Hit http://us.archive.ubuntu.com precise-backports/restricted TranslationIndex Hit http://us.archive.ubuntu.com precise-backports/universe TranslationIndex Hit http://us.archive.ubuntu.com precise/main Translation-en Hit http://us.archive.ubuntu.com precise/multiverse Translation-en Hit http://us.archive.ubuntu.com precise/restricted Translation-en Hit http://us.archive.ubuntu.com precise/universe Translation-en Hit http://us.archive.ubuntu.com precise-updates/main Translation-en Hit http://us.archive.ubuntu.com precise-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-updates/restricted Translation-en Hit http://us.archive.ubuntu.com precise-updates/universe Translation-en Hit http://us.archive.ubuntu.com precise-backports/main Translation-en Hit http://us.archive.ubuntu.com precise-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-backports/restricted Translation-en Hit http://us.archive.ubuntu.com precise-backports/universe Translation-en Fetched 12.8 MB in 1min 33s (137 kB/s) Is this a new feature in 12.04? Or, if it is unintended, is there a way I can fix this? Thanks.
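For reference, the 12.8 MB total reported at the end is simply the sum of the sizes on the Get: lines, and the full Sources and Packages index files alone (934 kB + 5,019 kB + 1,274 kB + 4,796 kB) account for roughly 12 MB of it. The following is a small sketch, not from the original question, that totals those sizes from a saved apt-get update log; it assumes the output was redirected to a file such as update.log with one entry per line.

    using System;
    using System.Globalization;
    using System.IO;
    using System.Text.RegularExpressions;

    class AptFetchTotal
    {
        static void Main(string[] args)
        {
            // Assumption: the apt-get update output was saved to a file,
            // e.g. `sudo apt-get update > update.log`.
            string path = args.Length > 0 ? args[0] : "update.log";

            // Matches the size at the end of each "Get:" line, e.g. "[5,019 kB]" or "[198 B]".
            Regex sizePattern = new Regex(@"^Get:\d+.*\[([\d,\.]+)\s*(B|kB|MB)\]", RegexOptions.Compiled);

            double totalBytes = 0;
            foreach (string line in File.ReadLines(path))
            {
                Match m = sizePattern.Match(line);
                if (!m.Success)
                {
                    continue;
                }

                double value = double.Parse(m.Groups[1].Value.Replace(",", ""), CultureInfo.InvariantCulture);
                string unit = m.Groups[2].Value;

                // apt uses decimal (SI) units: 1 kB = 1000 B, 1 MB = 1,000,000 B.
                totalBytes += unit == "MB" ? value * 1000000
                            : unit == "kB" ? value * 1000
                            : value;
            }

            Console.WriteLine("Total fetched: {0:F1} MB", totalBytes / 1000000);
        }
    }

Running this over output like the above should land close to the 12.8 MB apt reports, which shows the total is dominated by the full index downloads.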

    Read the article

  • This task is currently locked by a running workflow and cannot be edited: a limitation of both Nintex and SPD workflows

    - by ybbest
    Note: this post is from the Nintex Forum here. These limitations apply to both SharePoint Designer workflows and Nintex workflows, as Nintex uses the SharePoint workflow engine. The most common cause in my experience is a ‘parent’ workflow generating more than one task at once, which is easy to do when an approval process has multiple approvers. You can also have another workflow running when the task is created; a common scenario is setting a custom column value on your approval task. For me this is a huge limitation, and as a Nintex lover I really hope Nintex can solve this problem with Microsoft going forward.

Introduction
“This task is currently locked by a running workflow and cannot be edited” is a common message seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur:

1. The process begins.
2. The workflow places a ‘lock’ on the task so nothing else can change the values while the workflow is processing.
3. The workflow processes the task.
4. The lock is released when the task processing is finished.

When the message is encountered, it usually indicates that an error occurred between steps 2 and 4. As a result, the lock is never released. The ‘task locked’ message is therefore not an error itself but a symptom of another error; it does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages. Some initial observations to narrow down the potential causes are:

Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time.

Does the error occur the first time the user tries to respond to a task, or do they respond, notice the workflow does not continue, and see the error when they respond again? If the message is present when the user first responds to the task, the issue occurred when the task was created. Otherwise, it occurred when the user attempted to respond to the task.

Causes

Modifying the task list
A cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the ‘Assigned to’ field in a task list to be a multiple selection will cause this behaviour.

Deleting the workflow task then restoring it from the Recycle Bin
If you start a workflow, delete the workflow task and then restore it from the Recycle Bin in SharePoint, the workflow will fail with the ‘task locked’ error. This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow. You will need to terminate the workflow and start it again.

Parallel simultaneous responses
A cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the ‘task locked’ message will display. Nintex included a workaround for this issue in build 11000.
In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments.

Additional processing on the task
A cause of this error appearing both consistently and intermittently is having an additional system running on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list, or another automated process querying and updating workflow tasks. Note: this Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the ‘task locked’ issues when the ‘parent’ workflow is generating more than one task at once.

Isolated system error
If the error is a rare or ‘one off’ event, then an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue, and when they respond again the ‘task locked’ message will display. There will be an error in the SharePoint ULS logs at the time the user originally responded.

Temporary delay while the workflow processes
If the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the task locked error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case nothing actually went wrong, and the error message gives an accurate indication of what is happening: the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or just after the SharePoint application pool has started.

Modifying the task via a web service with an invalid URL
If the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the URL must be a valid alternate access mapping (AAM) URL. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked.

The workflow has become stuck without any apparent errors
This behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine. If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, “Flexi-task”, “State machine” or “Task Reminder” actions or variables, you could be affected. Check the SharePoint 2010 Updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847. The October CU is recommended: http://support.microsoft.com/kb/2553031. The fix is described as “Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity”. If you find this is occurring in your environment, install the October CU, terminate all the affected running workflows and run them afresh.

Investigative steps
The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it. Any customizations that were made to the original task list should not be made to the new task list.
If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses:

1. In Workflow Designer, select Settings -> Startup Options.
2. Configure the task list as required.

If none of the scenarios above helps, check the SharePoint logs for any messages with a category of ‘Workflow Infrastructure’.

Conclusion
The information in this article has been gathered from observations and investigations by Nintex. The source of these issues is the underlying SharePoint workflow engine. This article will be updated if further causes are discovered. From <http://connect.nintex.com/forums/thread/6503.aspx>
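Since in most cases a genuinely locked workflow has to be terminated and started again, the following is a minimal sketch of how that termination can be scripted with the SharePoint 2010 server object model. It is not part of the original forum post; the site URL, list name and item ID are placeholders, and terminating a workflow is destructive, so treat this as an illustration only.

    using System;
    using System.Collections.Generic;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Workflow;

    class TerminateStuckWorkflows
    {
        static void Main()
        {
            // Placeholder values: point these at the item whose workflow is stuck.
            using (SPSite site = new SPSite("http://sharepoint/sites/demo"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Documents"];
                SPListItem item = list.GetItemById(42);

                // Copy the workflow instances first, because cancelling a workflow
                // modifies the collection we would otherwise be enumerating.
                var running = new List<SPWorkflow>();
                foreach (SPWorkflow workflow in item.Workflows)
                {
                    running.Add(workflow);
                }

                foreach (SPWorkflow workflow in running)
                {
                    Console.WriteLine("Terminating workflow instance {0}", workflow.InstanceId);
                    // Equivalent to choosing 'Terminate this workflow' in the UI.
                    SPWorkflowManager.CancelWorkflow(workflow);
                }
            }
        }
    }

This has to run on a SharePoint server (it references Microsoft.SharePoint.dll), and after terminating the instance the workflow must be started again manually or programmatically.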

    Read the article

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again!

Update: 4th March 2010 – After having huge problems getting this working, I read Billy Wang’s post, which showed me the light.

The problem here is that even though the test passes locally it will not during an automated build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against it! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up.

First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect whether you are running in IIS, the ASP.NET Development Server or None, and create the relevant tags.

Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will auto-detect most of the attributes that are required.

    /// <summary>
    /// A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile
    /// </summary>
    // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example,
    // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server,
    // whether you are testing a page, web service, or a WCF service.
    [TestMethod()]
    [HostType("ASP.NET")]
    [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")]
    [UrlToTest("http://localhost:3100/")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    public void SaveDefaultProjectFileTest()
    {
        IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value
        string strComputerName = string.Empty; // TODO: Initialize to an appropriate value
        bool expected = false; // TODO: Initialize to an appropriate value
        bool actual;
        actual = target.SaveDefaultProjectFile(strComputerName);
        Assert.AreEqual(expected, actual);
        Assert.Inconclusive("Verify the correctness of this test method.");
    }

Figure: Auto-created code that shows the attributes required to run correctly in IIS or, in this case, the ASP.NET Development Server.

If you are a purist and don’t like creating unit tests like this, then you just need to add the three attributes manually:

HostType – This attribute specifies what host to use. It is an extensibility point, so you could write your own, or you could just use “ASP.NET”.
UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page; otherwise your test may not run on the server, but may pass anyway.
AspNetDevelopmentServerHost – This is a nasty one; it is only used if you are using the ASP.NET Development Server and is unnecessary if you are using IIS. This sets the host settings, and the first value MUST be the physical path to the root of your web application.

OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services.
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use.

    [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
    {
        ProfileServiceClient target = new ProfileServiceClient();
        bool isTrue = target.SaveDefaultProjectFile("Mav");
        Assert.AreEqual(true, isTrue);
    }

Figure: The AspNetDevelopmentServer attribute will make sure that the specified web application is launched.

Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when creating a reference to the web service? Well, they would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service client, and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the endpoint address to the same address as your dynamically hosted web application.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using System.Reflection;
    using System.ServiceModel.Description;
    using System.ServiceModel;

    namespace SSW.SQLDeploy.Test
    {
        class WcfWebServiceHelper
        {
            public static bool TryUrlRedirection(object client, TestContext context, string identifier)
            {
                bool result = true;
                try
                {
                    // Find the generated client's Endpoint property via reflection.
                    PropertyInfo property = client.GetType().GetProperty("Endpoint");

                    // MSTest exposes the dynamically assigned server URL through
                    // TestContext.Properties["AspNetDevelopmentServer.<identifier>"].
                    string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString();
                    Uri webServerUri = new Uri(webServer);
                    ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null);

                    // Swap the authority (host:port) from the config for the real one.
                    EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address);
                    builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority));
                    endpoint.Address = builder.ToEndpointAddress();
                }
                catch (Exception e)
                {
                    context.WriteLine(e.Message);
                    result = false;
                }
                return result;
            }
        }
    }

Figure: This fixes the problem of the URL in your app.config not matching the dynamically hosted ASP.NET Development Server port.

We can now add a call to this method after we have created the proxy object, to change the endpoint for the service to the correct one. The call is wrapped in an assert because if it fails there is no point in continuing.

    [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
    {
        ProfileServiceClient target = new ProfileServiceClient();
        Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
        bool isTrue = target.SaveDefaultProjectFile("Mav");
        Assert.AreEqual(true, isTrue);
    }

Figure: Editing the endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy.

As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source?
What if you are using a build server? How do you tell MSTest where to look for the files? To the rescue is a property called “%PathToWebRoot%”, which is always right on the build server. It will always point to the build drop folder for your solution’s web sites, which will be “\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\” or whatever your build drop location is. So let’s change the code above to add this.

    [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
    {
        ProfileServiceClient target = new ProfileServiceClient();
        Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
        bool isTrue = target.SaveDefaultProjectFile("Mav");
        Assert.AreEqual(true, isTrue);
    }

Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere.

Now we have another problem… this will ONLY run on the build server and will fail locally, as %PathToWebRoot%’s default value is “C:\Users\[profile]\Documents\Visual Studio 2010\Projects”. Well, this sucks… How do we get the test to run on any build server and any developer laptop? Open “Tools | Options | Test Tools | Test Execution” in Visual Studio and you will see a field called “Web application root directory”. This is where you override that default.

Figure: You can override the default website location for tests.

In my case I would put in “D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main”, and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a “$/SSW/SqlDeploy/DEV/34567” branch from Main and I want to run tests in there? Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done.

Conclusion
Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test ANYWHERE solution: any build server, any developer workstation.

Resources:
http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html
http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html
http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx
http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx
http://www.5z5.com/News/?543f8bc8b36b174f

Technorati Tags: VS2010, MSTest, Team Build 2010, Team Build, Visual Studio, Visual Studio 2010, Visual Studio ALM, Team Test, Team Test 2010
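One detail the test snippets above take for granted is where TestContext comes from: MSTest injects it through a public TestContext property on the test class, and the AspNetDevelopmentServer entries (including the dynamically assigned port) are surfaced through TestContext.Properties. The following is a minimal sketch of the surrounding test class, assuming the generated ProfileServiceClient proxy and the WcfWebServiceHelper shown above, and assuming the AspNetDevelopmentServer attribute resolves from the Microsoft.VisualStudio.TestTools.UnitTesting.Web namespace of the VS 2010 unit test framework; the class name is illustrative, not from the original post.

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting.Web;

    namespace SSW.SQLDeploy.Test
    {
        [TestClass]
        public class ProfileServiceIntegrationTests
        {
            // MSTest assigns this before each test runs; the web server address for
            // "WebApp1" is available as TestContext.Properties["AspNetDevelopmentServer.WebApp1"].
            public TestContext TestContext { get; set; }

            [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
            [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
            [TestMethod]
            public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
            {
                ProfileServiceClient target = new ProfileServiceClient();

                // Point the WCF client at the dynamically hosted ASP.NET Development Server
                // before making any service calls.
                Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));

                Assert.IsTrue(target.SaveDefaultProjectFile("Mav"));
            }
        }
    }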

    Read the article

  • Quantal: Broken apt-index, can't fix dependencies

    - by arcyqwerty
    I can't seem to add/remove/update packages Ubuntu software update has a notice about partial upgrades but fails Seems to be similar to this problem $ sudo apt-get update Ign http://archive.ubuntu.com quantal InRelease Ign http://security.ubuntu.com precise-security InRelease Ign http://us.archive.ubuntu.com precise InRelease Ign http://extras.ubuntu.com precise InRelease Ign http://us.archive.ubuntu.com precise-updates InRelease Ign http://us.archive.ubuntu.com precise-backports InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://archive.canonical.com precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Hit http://archive.ubuntu.com quantal Release.gpg Get:1 http://security.ubuntu.com precise-security Release.gpg [198 B] Hit http://extras.ubuntu.com precise Release.gpg Get:2 http://us.archive.ubuntu.com precise Release.gpg [198 B] Get:3 http://us.archive.ubuntu.com precise-updates Release.gpg [198 B] Hit http://archive.canonical.com precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://archive.ubuntu.com quantal Release Get:4 http://security.ubuntu.com precise-security Release [49.6 kB] Get:5 http://us.archive.ubuntu.com precise-backports Release.gpg [198 B] Get:6 http://us.archive.ubuntu.com precise Release [49.6 kB] Hit http://extras.ubuntu.com precise Release Hit http://archive.canonical.com precise Release Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release Hit http://archive.ubuntu.com quantal/main amd64 Packages Hit http://extras.ubuntu.com precise/main Sources Hit http://archive.canonical.com precise/partner Sources Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Get:7 http://us.archive.ubuntu.com precise-updates Release [49.6 kB] Hit http://extras.ubuntu.com precise/main amd64 Packages Hit http://extras.ubuntu.com precise/main i386 Packages Hit http://archive.ubuntu.com quantal/main i386 Packages Hit http://archive.ubuntu.com quantal/main Translation-en Hit http://archive.canonical.com precise/partner amd64 Packages Hit http://archive.canonical.com precise/partner i386 Packages Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main amd64 Packages Hit http://ppa.launchpad.net precise/main i386 Packages Get:8 http://us.archive.ubuntu.com precise-backports Release [49.6 kB] Hit http://ppa.launchpad.net precise/main Sources Get:9 http://security.ubuntu.com precise-security/main Sources [22.5 kB] Hit http://ppa.launchpad.net precise/main amd64 Packages Hit http://ppa.launchpad.net precise/main i386 Packages Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main amd64 Packages Hit http://ppa.launchpad.net precise/main i386 Packages Get:10 http://us.archive.ubuntu.com precise/main Sources [934 kB] Get:11 http://security.ubuntu.com precise-security/restricted Sources [14 B] Get:12 http://security.ubuntu.com precise-security/universe Sources [7,832 B] Ign http://archive.ubuntu.com quantal/main Translation-en_US Get:13 http://security.ubuntu.com precise-security/multiverse Sources [713 B] Get:14 http://security.ubuntu.com precise-security/main amd64 Packages [67.8 kB] Ign http://archive.canonical.com precise/partner Translation-en_US Get:15 http://security.ubuntu.com precise-security/restricted amd64 Packages [14 B] Get:16 http://security.ubuntu.com precise-security/universe amd64 
Packages [18.8 kB] Ign http://extras.ubuntu.com precise/main Translation-en_US Ign http://archive.canonical.com precise/partner Translation-en Get:17 http://security.ubuntu.com precise-security/multiverse amd64 Packages [1,155 B] Get:18 http://security.ubuntu.com precise-security/main i386 Packages [70.2 kB] Ign http://extras.ubuntu.com precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Get:19 http://security.ubuntu.com precise-security/restricted i386 Packages [14 B] Get:20 http://security.ubuntu.com precise-security/universe i386 Packages [19.0 kB] Get:21 http://security.ubuntu.com precise-security/multiverse i386 Packages [1,394 B] Ign http://ppa.launchpad.net precise/main Translation-en Hit http://security.ubuntu.com precise-security/main Translation-en Hit http://security.ubuntu.com precise-security/multiverse Translation-en Hit http://security.ubuntu.com precise-security/restricted Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Hit http://security.ubuntu.com precise-security/universe Translation-en Ign http://security.ubuntu.com precise-security/main Translation-en_US Ign http://security.ubuntu.com precise-security/multiverse Translation-en_US Ign http://security.ubuntu.com precise-security/restricted Translation-en_US Ign http://security.ubuntu.com precise-security/universe Translation-en_US Get:22 http://us.archive.ubuntu.com precise/restricted Sources [5,470 B] Get:23 http://us.archive.ubuntu.com precise/universe Sources [5,019 kB] Get:24 http://us.archive.ubuntu.com precise/multiverse Sources [155 kB] Get:25 http://us.archive.ubuntu.com precise/main amd64 Packages [1,273 kB] Get:26 http://us.archive.ubuntu.com precise/restricted amd64 Packages [8,452 B] Get:27 http://us.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB] Get:28 http://us.archive.ubuntu.com precise/multiverse amd64 Packages [119 kB] Get:29 http://us.archive.ubuntu.com precise/main i386 Packages [1,274 kB] Get:30 http://us.archive.ubuntu.com precise/restricted i386 Packages [8,431 B] Get:31 http://us.archive.ubuntu.com precise/universe i386 Packages [4,796 kB] Get:32 http://us.archive.ubuntu.com precise/multiverse i386 Packages [121 kB] Hit http://us.archive.ubuntu.com precise/main Translation-en Hit http://us.archive.ubuntu.com precise/multiverse Translation-en Hit http://us.archive.ubuntu.com precise/restricted Translation-en Hit http://us.archive.ubuntu.com precise/universe Translation-en Get:33 http://us.archive.ubuntu.com precise-updates/main Sources [124 kB] Get:34 http://us.archive.ubuntu.com precise-updates/restricted Sources [1,379 B] Get:35 http://us.archive.ubuntu.com precise-updates/universe Sources [30.9 kB] Get:36 http://us.archive.ubuntu.com precise-updates/multiverse Sources [1,058 B] Get:37 http://us.archive.ubuntu.com precise-updates/main amd64 Packages [311 kB] Get:38 http://us.archive.ubuntu.com precise-updates/restricted amd64 Packages [2,417 B] Get:39 http://us.archive.ubuntu.com precise-updates/universe amd64 Packages [85.4 kB] Get:40 http://us.archive.ubuntu.com precise-updates/multiverse amd64 Packages [1,829 B] Get:41 http://us.archive.ubuntu.com precise-updates/main i386 Packages [314 kB] Get:42 http://us.archive.ubuntu.com precise-updates/restricted i386 Packages [2,439 B] Get:43 http://us.archive.ubuntu.com precise-updates/universe i386 
Packages [85.9 kB] Get:44 http://us.archive.ubuntu.com precise-updates/multiverse i386 Packages [2,047 B] Hit http://us.archive.ubuntu.com precise-updates/main Translation-en Hit http://us.archive.ubuntu.com precise-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-updates/restricted Translation-en Hit http://us.archive.ubuntu.com precise-updates/universe Translation-en Get:45 http://us.archive.ubuntu.com precise-backports/main Sources [1,845 B] Get:46 http://us.archive.ubuntu.com precise-backports/restricted Sources [14 B] Get:47 http://us.archive.ubuntu.com precise-backports/universe Sources [11.1 kB] Get:48 http://us.archive.ubuntu.com precise-backports/multiverse Sources [1,383 B] Get:49 http://us.archive.ubuntu.com precise-backports/main amd64 Packages [1,271 B] Get:50 http://us.archive.ubuntu.com precise-backports/restricted amd64 Packages [14 B] Get:51 http://us.archive.ubuntu.com precise-backports/universe amd64 Packages [9,701 B] Get:52 http://us.archive.ubuntu.com precise-backports/multiverse amd64 Packages [996 B] Get:53 http://us.archive.ubuntu.com precise-backports/main i386 Packages [1,271 B] Get:54 http://us.archive.ubuntu.com precise-backports/restricted i386 Packages [14 B] Get:55 http://us.archive.ubuntu.com precise-backports/universe i386 Packages [9,703 B] Get:56 http://us.archive.ubuntu.com precise-backports/multiverse i386 Packages [999 B] Hit http://us.archive.ubuntu.com precise-backports/main Translation-en Hit http://us.archive.ubuntu.com precise-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-backports/restricted Translation-en Hit http://us.archive.ubuntu.com precise-backports/universe Translation-en Ign http://us.archive.ubuntu.com precise/main Translation-en_US Ign http://us.archive.ubuntu.com precise/multiverse Translation-en_US Ign http://us.archive.ubuntu.com precise/restricted Translation-en_US Ign http://us.archive.ubuntu.com precise/universe Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/main Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/multiverse Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/restricted Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/universe Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/main Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/multiverse Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/restricted Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/universe Translation-en_US Fetched 19.9 MB in 34s (571 kB/s) Reading package lists... Done $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: netbase : Breaks: ifupdown (< 0.7) Breaks: ifupdown:i386 (< 0.7) E: Unmet dependencies. Try using -f. $ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: dh-apparmor html2text libmail-sendmail-perl libsys-hostname-long-perl Use 'apt-get autoremove' to remove them. The following extra packages will be installed: ifupdown Suggested packages: rdnssd The following packages will be upgraded: ifupdown 1 upgraded, 0 newly installed, 0 to remove and 1179 not upgraded. 
85 not fully installed or removed. Need to get 0 B/54.1 kB of archives. After this operation, 19.5 kB of additional disk space will be used. Do you want to continue [Y/n]? (Reading database ... 222498 files and directories currently installed.) Preparing to replace ifupdown 0.7~beta2ubuntu8 (using .../ifupdown_0.7.1ubuntu1_amd64.deb) ... Unpacking replacement ifupdown ... dpkg: error processing /var/cache/apt/archives/ifupdown_0.7.1ubuntu1_amd64.deb (--unpack): trying to overwrite '/etc/init.d/networking', which is also in package netbase 5.0ubuntu1 Processing triggers for man-db ... Processing triggers for ureadahead ... Errors were encountered while processing: /var/cache/apt/archives/ifupdown_0.7.1ubuntu1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) cat /etc/apt/sources.list # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ precise universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. 
deb http://us.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb-src http://extras.ubuntu.com/ubuntu precise main deb http://archive.ubuntu.com/ubuntu/ quantal main

    Read the article

< Previous Page | 649 650 651 652 653 654 655 656 657 658 659 660  | Next Page >