Search Results

Search found 751 results on 31 pages for 'craig miles'.

  • Updating Windows DNS records from a remote Windows DNS server

    - by Luckyboy
    Does anyone know if it is possible for a Windows 2003 DNS server to update its records for a domain so that it contains all the records of the same domain on a remotely based DNS server? I'm almost certain that doesn't quite explain the problem, so I shall illustrate with an example: we have two offices, based about 100 miles apart. One deals with IT (intranet development etc.) while the other is a call centre that uses the intranet systems. Currently each office has its own DNS server, and both contain entries for the intranet sites. The difference is that the IT office's DNS records point to the various servers that host the intranet sites (e.g. intranetsite1 - 192.168.1.10, intranetsite2 - 192.168.1.11), while all of the entries in the call centre's DNS point to the IT office's DNS server (intranetsite1 - [it office ip address], intranetsite2 - [it office ip address]). Is there any way that the call centre's DNS server could automatically pick up all the DNS records hosted by the IT office's DNS, translating the IP addresses to the IP address of the IT office?

    Read the article

  • SAN Replication for Fault tolerance using EVA4400

    - by Sergei
    Hi everyone, I hope that someone can point me in the correct direction - it looks like I don't have enough knowledge of the subject, and the timeframes are too tight for me to explore different scenarios in depth. We have two datacenters a few miles apart, connected by a 100 Mbps link. Each datacenter will have 5 BL490 blades running ESX Standard, hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication set up. vCenter is going to be in the first datacenter, and both datacenters are networked. SAN replication is block-level, so it seems I cannot replicate just the changes: all writes would have to be replicated. This should not be a problem, as the link can sustain about 1 TB a day (roughly the ceiling of a fully utilised 100 Mbps link) and data can be buffered. I am having trouble, however, visualising how recovery would work in this case. We don't need instant recovery - I would say a 4-hour recovery time is acceptable - so a fancy automatic SRM-like DR scenario would not easily be accepted, for financial reasons; however, any comments are welcome. The current idea is the following: replicate LUNs from the primary site to the secondary. When disaster strikes, IT personnel switch on the ESX hosts on the remote side, connect the replicated LUNs to them, then register the VMs and change their IP addresses. I understand that this seems like a horribly manual process, and I am almost sure I have missed some obvious pitfalls. Could someone let me know what direction I should go in? Any articles regarding the subject? This is a brand new setup, and we would rather build up a basic recovery process and scale it later; I just need the right direction to allow for such scalability. Thank you very much in advance!

    Read the article

  • SCCM 2012 - some remote clients unable to download some applications, 401.2 error

    - by growse
    I've got a small SCCM 2012 deployment with about 35 clients attached. Most of these clients are on the same network as the single SCCM host, but three are about 1000 miles away. Oddly, these three clients have stopped being able to download some application packages over BITS. Publishing a new package works for all the other clients, but for these three it never seems to download; if I go to the software centre, it just hangs at "0% downloaded". On the client, DataTransfer.log says (repeatedly):

        CDTSJob::HandleErrors: DTS Job '{2DCBBB4C-6D84-479A-9218-885B72C834B9}' BITS Job '{E78147DD-4A26-4942-B4FD-6EC3EB77EECD}' under user 'S-1-5-18' OldErrorCount 442 NewErrorCount 443 ErrorCode 0x80072EE2 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)
        CDTSJob::HandleErrors: DTS Job ID='{2DCBBB4C-6D84-479A-9218-885B72C834B9}' URL='http://sccm-host:80/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1' ProtType=1 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)

    CAS.log says (repeatedly):

        Location update from CTM for content Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 and request {AD041FCB-03D2-4FE6-A6FA-38A6B80FB2A1} ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
        Download location found 0 - http://lonsbrndsccm02.mcs.int.thomsonreuters.com/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
        Download request only, ignoring location update ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)

    On the server, I've enabled failed-request log tracing. The raw IIS log says the following:

        2012-07-30 08:28:42 10.13.111.35 GET /SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1/sccm /NSCP-0.4.0.172-x64.msi 80 - 10.2.27.19 Microsoft+BITS/7.5 401 2 5 293

    That is a 401.2 error, meaning access denied. The failed-request log is large, but the punchline is that it ends with an "Unauthorized: Access is denied due to invalid credentials." message. All clients are members of the same domain and appear to be (otherwise) working great. I've re-installed the SCCM client and deleted and re-added the computer to SCCM. Some other packages seem to work fine; the daily anti-malware delta gets downloaded and applied without issue. Why are these packages failing?

    Read the article

  • How could Google Latitude find my exact PC location with no GPS or public wifi?

    - by Mike
    I found a similar question here, but I still don't get it. You see, I live in a small town, and every time I check my IP location via online services or speed-test websites, my location appears to be my ISP's server location (which in my case is 250 miles away). But when I tried Google Latitude, it pinpointed my exact location to within less than 100 meters! I use Windows Vista and Google Chrome, and when I got the message that "Google is trying to locate you", I agreed, just to check what the result would be. It was scary, very scary! What I've come up with after reading the above link is that Google has a kind of extensive database of WiFi locations. That would be understandable in the case of public and open WiFi networks that are used by a lot of people: some of them might be using applications that gather location data, and somehow this information ends up in giant Google databases. From those, Google could pinpoint a WiFi location based on its MAC address, along with the bits of info that have been gathered via various sources. The issue here is that my WiFi is private; I don't even broadcast my WiFi name (SSID). So how on earth did Google find my exact PC location? Please break down the answer in layman's terms as far as possible.

    Read the article

  • CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?

    - by ewwhite
    Hello, I'm having a difficult time with a remote CentOS 5.5 kickstart installation on an HP ProLiant DL360 G6, in an environment where I maintain an internal CentOS yum repository. The kickstart installation and post scripts have been tested and normally work. This hardware is also common in this environment, so I do not believe it is a factor. Unfortunately, I'm having problems with one specific server install. The system is remote to the yum repository, at a distance of 500 miles. They are connected over a private, low-latency, 100-megabit layer 2 connection (26 ms round trip). I'm mounting the 10 MB CentOS 5 netinstall ISO image via the HP iLO remote console. The initial boot parameters are:

        linux ks=http://yum.abctrading.com/prop.cfg ksdevice=eth0 ip=x.x.x.x dns=x.x.x.x netmask=255.255.255.0 gateway=x.x.x.x

    I'm using the url --url http://ks.abctrading.com/5.5/os/x86_64/ method of installation. This quickly boots into the anaconda installer, pulls the kickstart config and formats the drives, but the process eventually halts at the screen below, reading "Starting install process.". Switching to the other virtual consoles gives the second image below. The process stalls at this point and cannot proceed with the rest of the installation. Running the same kickstart config locally works just fine. I've tried mounting the boot ISO from the console, as well as from the iLO 2 command line pointing to a locally-hosted boot ISO via http. How can I debug this? Are there any options I've overlooked?

    Read the article

  • Database mirroring login failure attempts on mirror server

    - by Chandan
    I have configured database mirroring between two servers about 40 miles apart. Server specification (the same for principal, mirror and witness): SQL Server 2008 Standard Edition, 64-bit. The configuration is high-safety with automatic failover. Initially we tested our .NET web application against both the principal and the mirror and made sure that the login is not orphaned. Things run fine generally, but sometimes on the mirror server I see failed login attempts:

        Login failed for user 'd0main\user'. Reason: Failed to open the explicitly specified database. [CLIENT: xx.xx.x.x]
        Error: 18456, Severity: 14, State: 38.

    This error appears 3-4 times a day, but not more than that. My question to the experts is: if the principal is alive, why does the application try to connect to the mirror at all? The default timeout for a .NET web page is 30 seconds, so is it possible that the application tries to connect to the principal and, after 30 seconds, assumes the principal is dead even though it is alive, and therefore opens a connection to the mirror, where it fails? Please help me with this problem.
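
    One detail worth checking, offered as a sketch rather than a diagnosis: ADO.NET's SqlClient supports an explicit "Failover Partner" connection-string keyword, and it also caches the partner name it learns from the principal on first connect. Once cached, a connection attempt that fails or times out against the principal can be retried transparently against the mirror, which would surface as occasional failed logins on the mirror even while the principal is healthy. The server and database names below are placeholders:

        using System;
        using System.Data.SqlClient;

        class MirroredConnectionDemo
        {
            static void Main()
            {
                // Hypothetical server/database names, for illustration only.
                // SqlClient caches the partner name learned from the principal,
                // so a timeout against the principal can silently be retried
                // against the mirror, where the login then fails if the
                // database there is not accessible.
                var connectionString =
                    "Data Source=PRINCIPAL-SRV;" +
                    "Failover Partner=MIRROR-SRV;" +
                    "Initial Catalog=AppDb;" +
                    "Integrated Security=True;" +
                    "Connect Timeout=30";

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    Console.WriteLine("Connected to: " + conn.DataSource);
                }
            }
        }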

    Read the article

  • Is there any program (or code, any language) that will mute all of the microphones on my computer?

    - by Sean
    Is there any program (or code, in any language) that will mute all of the microphones on my computer? If it is code, please make it as simple as possible; the only language I know is C#, and I am still VERY new to it. I just want to set up some way to mute my microphones from a hotkey/shortcut, and if I can find a program that can do that, I will be set. As I said, I can also do a little bit in C#, but the only code I have seen before for this was miles long (at least to me). My goal: I just want a program that opens up, toggles the mute on the microphones (all of the system's audio input), then closes. That is it, very simple. Thank you to anyone who tries to help me! EDIT: Yes, I am using Windows - Windows 7 32-bit. I already know that I can go into the volume mixer and do it that way, but I need to do this while running a full-screen application, and it is a hassle to have to exit full screen, open the volume mixer, click the mute icon, then go back into full screen, and do the whole thing over again to unmute. I will be toggling back and forth quite often, so doing it that way takes a lot of effort.
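
    For what it's worth, here is a minimal sketch of such a toggle in C# using the open-source NAudio library (added via NuGet). It is an illustration under the assumption that NAudio's Core Audio wrappers (Vista/7 and later) are acceptable, not a polished tool:

        using System;
        using NAudio.CoreAudioApi; // from the NAudio NuGet package

        class MicMuteToggle
        {
            static void Main()
            {
                var enumerator = new MMDeviceEnumerator();

                // Walk every active capture endpoint (i.e. every microphone)
                foreach (var device in enumerator.EnumerateAudioEndPoints(
                             DataFlow.Capture, DeviceState.Active))
                {
                    // Flip the endpoint's master mute, then report the new state
                    device.AudioEndpointVolume.Mute = !device.AudioEndpointVolume.Mute;
                    Console.WriteLine(device.FriendlyName + " muted: " +
                                      device.AudioEndpointVolume.Mute);
                }
            }
        }

    Compiled to a small console exe, it could be launched from a Windows shortcut with a hotkey assigned, which avoids leaving the full-screen application at all.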

    Read the article

  • Wireless very slow on one laptop on network, all other machines normal?

    - by th3dude19
    My new laptop (Acer Aspire Timeline 3810TZ running Windows 7 Home Premium 64-bit) is acting very strangely on my wireless network. Below are the issues I'm noticing:

    - The connection frequently drops. I occasionally see the icon change from 'full bars' to 'empty bars with a yellow star' (meaning no connection).
    - Almost every website I visit (in Firefox) hangs for a long time on 'Looking up www.amazon.com', for example. After a long pause, it finally starts loading the website.

    Neither of these problems exists on any other machine on my network; I also have a desktop running the same OS wirelessly, and it works fine. I've run several Speedtest.net tests and the speeds are great (20 Mbit down / 4 up). Results from pingtest.net are as follows:

        Line quality: D
        Ping: 46 ms
        Jitter: 65 ms
        Packet loss: 9%

    These results are to a server that is less than 10 miles from my residence. The results on the other machines in my house are normal. Any suggestions? This is becoming very annoying, as I purchased this machine primarily for browsing.

    Read the article

  • Automated Linux VMs on Hyper-V 2012

    - by Mick
    I have a requirement to create a ton of Linux VMs for our customers (we run managed infrastructure) on Hyper-V 2012 in the coming months, and I have an issue with automating it. Here is how I need it to work:

    1. A user accesses their web page and creates a VM.
    2. The VM is created with a unique IP and name.
    3. The user logs in over SSH.

    I know Hyper-V quite well, can work with PowerShell, and am a C# programmer, so the development side of things is taken care of. I also know enough about Linux to be at least competent: I have used it on and off for a number of years, but not done anything enterprise-level with it. All this can be done easily by manual processes, but I need to be able to script or program this to automate it, as there could be hundreds of VMs being created - and I don't know how. My first thought is to have a database of pre-generated names and IPs, but I don't know how to get a Linux VM to boot up and grab one from the database... I suppose a kickstart script would take care of it, but I don't know what to do from there. Here is what is bouncing around in my head:

    - Create a standard Linux build. (Easy to do.)
    - Someone clicks "Create VM"; I pull a name and IP from the database and write them to a kickstart script, as in the sketch below. (Easy to do.)
    - I could then open the template VHDX file, copy in the script and save it. (Not sure if possible.)
    - The user boots up the new VM, and the kickstart script gives it the name and IP I assigned.

    My problem is that I don't know how to open a VHDX file and insert a kickstart script into it - I can't figure it out. I am reaching here, and this solution may be miles off... I am more used to creating Windows VMs with scripts and so on, which I am more familiar with. Any help would be appreciated. Thanks, Mick
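
    A sketch of the kickstart-generation step (the easy part), assuming a CentOS/RHEL-style kickstart. The hostname, IP values, mirror URL and file paths are all hypothetical, and the root password hash is a placeholder:

        using System;
        using System.IO;

        class KickstartWriter
        {
            // Hypothetical helper: writes a per-VM kickstart file using the
            // name/IP pulled from the provisioning database.
            static void WriteKickstart(string path, string hostname, string ip,
                                       string netmask, string gateway, string dns)
            {
                var ks =
                    "install\n" +
                    "url --url http://mirror.example.local/centos/6/os/x86_64/\n" +
                    "network --bootproto static" +
                    " --ip " + ip +
                    " --netmask " + netmask +
                    " --gateway " + gateway +
                    " --nameserver " + dns +
                    " --hostname " + hostname + "\n" +
                    "rootpw --iscrypted $6$replace.with.real.hash\n" +
                    "reboot\n";

                File.WriteAllText(path, ks);
            }

            static void Main()
            {
                // Values would come from the name/IP database described above.
                WriteKickstart(@"C:\provision\vm042\ks.cfg",
                               "vm042", "10.20.30.42", "255.255.255.0",
                               "10.20.30.1", "10.20.30.1");
            }
        }

    As for getting the file into the guest without opening the VHDX: two commonly suggested alternatives are burning the per-VM ks.cfg onto a small ISO attached to the VM's virtual DVD drive, or serving it over HTTP and pointing the kernel's ks= boot parameter at a URL keyed to that VM, so the template disk never needs to change.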

    Read the article

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoid this whole process?) The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe data and restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite it being "old" was right and therefore the SMSes, calendar entries, etc. were discarded? As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more-recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • Faster, secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem, and I'm looking for a new secure protocol/client/server that's faster over a 1 Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed 1 Gb/s links over a distance of around 250 miles or so (not dark fibre, but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66 GHz Xeons, 4 GB, Windows 2003 EE 32-bit); at the 'server' end, an HP BL460c G6 (2 x quad-core 2.53 GHz Xeons, 48 GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2 GB files per week from the client to the server machine, but the transfer NEEDS to be secure. Using either iperf or regular FTP I can get ~80 MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4 MB/s; at this point the server's CPU is 3% busy, while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux, but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So, a few questions/thoughts:

    - Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic, so long as it'll work between Windows and Linux.
    - Should I be using hardware to do the encryption, either in the client or in the network? If so, what would you recommend?
    - I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving to Linux at the client end may be a better idea, but would be quite disruptive.
    - Am I missing a trick?

    Thanks in advance.
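
    To put the gap in perspective, a quick back-of-the-envelope calculation (decimal units and steady-state rates assumed):

        using System;

        class TransferBudget
        {
            static void Main()
            {
                double weeklyBytes = 500 * 2e9;   // 500 files x 2 GB, decimal units

                // Rough rates from the question: plain FTP/iperf vs. SFTP/WinSCP
                double fastRate = 80e6;           // ~80 MB/s
                double slowRate = 3.5e6;          // ~3.5 MB/s

                Console.WriteLine("At 80 MB/s:  {0:F1} hours/week",
                                  weeklyBytes / fastRate / 3600);
                Console.WriteLine("At 3.5 MB/s: {0:F1} hours/week",
                                  weeklyBytes / slowRate / 3600);
                // Prints roughly 3.5 hours vs. 79 hours: at the observed SFTP
                // rate the weekly batch barely fits inside the week at all.
            }
        }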

    Read the article

  • Sync, share and backup policy using NAS

    - by Cue
    I'm trying to come up with a way to keep in sync while sharing, and to keep a backup of, my music, photos and movies. Currently I have an iMac in Greece and a MacBook Pro (MBP) with me in the UK. As a result, I've ended up with two iPhoto and two iTunes libraries, not to mention documents scattered here and there, user settings, etc. I'd also like to have a backup in case of a drive failure or the need to clean-install. It seems that iPhoto and iTunes don't work really well with networked libraries. The way I think about it is to have a NAS where I keep my iTunes and iPhoto libraries, but also rsync daily to my MBP so I have a local copy. That way my files are shared across the network and also act as a backup; in addition, I get to have my files wherever I take my MBP, as well as the ability to clean-install. The tricky part comes from keeping the iMac, which is miles away, in sync. Again, I'm considering a mirrored setup (a NAS that rsyncs to the iMac), plus an rsync between the two NAS boxes. It pretty much resembles the way Dropbox works, minus the requirement to go through their servers, but I'm no "superuser" and don't really know if such a setup is even feasible. It looks like there are so many things that can go wrong.

    Read the article

  • Throughput and why do ISPs sell too much bandwidth?

    - by jonescb
    I hope the question makes sense the way I worded it. :) I've been wondering: maximum theoretical TCP throughput is RWIN/RTT (window size / round-trip time) - Source 1 and Source 2. So if a major city only 100 miles away gives me a ping of 50 ms, and I have the default 64 KB TCP window size, then my maximum throughput will be about 10.5 Mb/s (see the worked example below). Everything further away would give me a higher ping and therefore lower throughput. Is there any reason to buy something like FiOS with a 50 Mb/s or greater connection? Will you ever be able to reach that kind of speed? I know you can increase the TCP window size to increase throughput, but it has to happen at both ends, which is a deal-breaker because you can't control the server. I'm assuming other network protocols like UDP aren't quite as affected by latency as TCP is, but how much of overall network traffic is non-TCP versus TCP? Am I just misguided about how throughput works? If the above is correct, then why should a consumer like me buy way more bandwidth than can realistically be used? Maybe the only reason is downloading multiple things at once, or one thing from multiple servers/peers?
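
    A worked version of that RWIN/RTT ceiling (one full window per round trip, no window scaling). Note that TCP window scaling (RFC 1323), which Windows Vista and later enable by default, can raise the effective window well beyond 64 KB when both ends support it:

        using System;

        class TcpThroughput
        {
            static void Main()
            {
                double windowBytes = 65535;   // default 64 KB receive window
                double rttSeconds = 0.050;    // 50 ms round trip

                // Best case: one full window delivered per round trip
                double mbps = windowBytes / rttSeconds * 8 / 1e6;
                Console.WriteLine("Max throughput: {0:F1} Mbit/s", mbps); // ~10.5
            }
        }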

    Read the article

  • Does anyone know the touchpad-disabling driver for the Dell XPS 14?

    - by rkar
    I accidentally deleted the driver that disables the touchpad on my Dell XPS 14, and I don't know where to find it to reinstall it. I have already tried Dell support, without luck (I don't want to install something that I don't know). Would anyone please send me the link to that driver? To clarify: there is an Fn shortcut key used to disable/enable the touchpad on the XPS 14; when you press it, an orange light on the touchpad lights up and the touchpad stops working. After I deleted the driver responsible for that function, it stopped working. My service centre is almost 600 miles away, and the technician said he forgot to add the driver the last time he fixed my laptop. Since the internet connection at his place is slow, he can't send me the file by mail either (it is about 20 MB or so, according to him). My service tag is "C37KWL1", and I don't know how to find the specific driver for that function key. My Dell has a shortcut for disabling the touchpad, with a picture on it, along with the other multimedia shortcut keys; since the XPS 15 and 17 have a separate touch key instead of putting it on the function keys, mine has to choose between function keys and multimedia keys through a setting. All I need is that particular file - can anyone send me the link?

    Read the article

  • Is it possible to add a WiFi HotSpot to an already established LAN, keep the two separate, and not modify the primary router?

    - by user12844
    I have a setup where my Cisco ASA sits in one facility, providing access to the Internet for two buildings. The two buildings are geographically separated by a wireless bridge spanning about 10 miles. All computers and equipment inside the LAN are on the same subnet (it's pretty small), and we have WiFi APs in both locations providing wired and wireless access to the LAN. Given all the BYOD devices (iPods, smartphones, etc.) coming into the office, as well as visiting reps, we would also like to provide a non-secure hotspot that is device-independent (the devices cannot see or communicate with each other) and LAN-independent (the devices cannot see or use anything on the LAN), which anyone could use for their devices and which gives them access to the Internet ONLY, without needing a password. I get that this could be possible at the main facility if I reconfigured the Cisco and created VLANs etc., but then I would need to get it across my bridge as well, and I don't think that would be possible without serious reconfiguration of everything. I would really like some kind of magic drop-in solution that can piggyback on my LAN without needing very many, if any, changes to the current setup.

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

    So, if we start at the start, how did you get started in programming?

    When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren't very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

    How did you learn to program? Was there someone helping you out?

    Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like "Oh my God! I don't have to write out the same line over and over and over again any more. It's amazing!"

    When did you think this might be something that you actually wanted to do as a career?

    For a long time, I thought it wasn't something that you would do as a career, because it was too much fun to be a career. I thought I'd do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn't interest me whatsoever. I thought "I could be a programmer, and there's loads of money there, and I'm good at it, and it's fun", but also that I shouldn't spoil my hobby. Now I don't really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

    Do you think you learnt much about programming at university?

    Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn't have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs, which I think is actually becoming really fashionable now, but I've been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would never have done on my own, so it was important. And then there are knowledge-based courses, which are not about programming itself but about things that have been programmed, like TCP. Those are really important as examples of how to approach things.

    Did you do any internships while you were at university?

    Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I'm so much better at writing nice code now than I used to be back then.

    Was there just not a culture of looking after your code?

    There was, they just didn't hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn't have any computer scientists, which is a bit odd in a programming company. But even beyond that, they didn't have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn't the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn't be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don't know about, but they shouldn't consistently be the ones who know the most. It's not a good environment to learn.

    I was going to ask how you found working with people who were more experienced than you...

    When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I've been coding since I was tiny. At university there were people who were cleverer than me, but there weren't very many who were more experienced programmers than me. During my internship, I didn't find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate's not so big on the actual code review, at least it wasn't when I started. We did an entire product release, and then somebody looked over all of the UI of that product which I'd written and said what they didn't like. By that point, it was way too late and I'd disagree with them.

    Do you think the lack of code reviews was a bad thing?

    I think if there's going to be any oversight of new people, then it should be continuous rather than chunky. For me I don't mind too much; I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I'd be keener on oversight, and then the right way to do it is continuously and in very, very small chunks.

    Have you had any significant projects you've worked on outside of a job?

    When I was a teenager I wrote all sorts of stuff. I used to write games; I derived how to do isometric projections myself once. I didn't know what the word was so I couldn't Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

    Could you explain a little more about NAct?

    It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don't need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other's toes. You could lock, but locks are really dangerous if you're using more than one of them. You get interactions like deadlocks, and that's just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks, as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.

    Do you think that this is suitable for the majority of parallel programming, or do you think it's only suitable for specific cases?

    It's suitable for most difficult parallel programming. If you've just got a hundred web requests which are all independent of each other, then I wouldn't bother, because it's easier to just spin them up in separate threads and they can proceed independently of each other. But where you've got difficult parallel programming, where you've got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

    When you're using actors, you presumably still have to write your code in a different way from how you would otherwise write single-threaded code.

    You can't use actors with any methods that have return types, because you're not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you've got to use tasks, so that you can use "async" and "await" to wait asynchronously for it. But other than that, you can still stick things in classes, so it's not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn't much of a jump.

    If I've already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

    Usually pretty easy. If you can identify even one boundary where things look like messages, and you have components where some objects live on one side and other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult.

    If we do get 1000-core desktop PCs, do you think actors will scale up?

    It's hard. There are always in the order of twenty to fifty actors in my whole program, because I tend to write each component as an actor, and I tend to have one instance of each component. So this won't scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary, which are living on different threads. So it's like a distributed hash table, but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn't be too big a leap to start doing proper parallelism.

    Do you think it helps if actors get baked into the language, similarly to Erlang?

    Erlang is excellent in that it has thread-local garbage collection. C# doesn't, so there's a limit to how well C# actors can possibly scale, because there's a single garbage-collected heap shared between all of them. When you do a global garbage collection, you've got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they're insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it's quite well-suited to support actors, even if it's not baked into the language.

    Speaking of language design, do you have a favourite programming language?

    I'll choose a language which I've never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don't have to write tests so much.

    When you say it doesn't have some of the niggles?

    C# doesn't allow the use of a property as a method group. It doesn't have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you've checked all the cases, which is really useful in functional-style programming.

    Pattern-matching, in other words.

    That's actually the major niggle. C# is pretty good, and I'm quite happy with C#.

    And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

    I'm quite a pragmatist; I don't think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don't know anyone who can teach me, and the Internet won't teach me. That's the main reason I wouldn't use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn't instigate it.

    What about things in C#? For instance, there's contracts in C#, so you can try to statically verify a bit more about your code. Do you think that's useful, or just not worthwhile?

    I've not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn't seem to have ended up true for C# contracts. I don't think anyone who's tried them thinks they're any good. I might be wrong.

    On a slightly different note, how do you like to debug code?

    I think I'm quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I've been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I'm scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there's a bug and I can't work out why it's happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I'm very stupid about how I debug. I won't make any guesses, I won't use any intuition; I will only identify the experiment that's going to binary chop most effectively and repeat, rather than trying to guess anything. I suppose it's quite top-down.

    Is most of the time then spent in the debugger?

    Absolutely. If at all possible I will never debug using print statements or logs. I don't really hold much stock in outputting logs. If there's any bug which can be reproduced locally, I'd rather do it in the debugger than by outputting logs. And with SmartAssembly error reporting, there's not a lot that can't be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I'll use logs. But I hate using logs. You stare at the log, trying to guess what's going on, and that's exactly what I don't like doing. You have to just look at it and see whether it looks right or wrong.

    We've covered how you get to grips with bugs. How do you get to grips with an entire codebase?

    I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn't too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code's really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim, so that I get a little achievement after every few hours of debugging. Once I've learnt the codebase I might be able to fix all the bugs in an hour, but I'd rather be using them as an aim while I'm learning the codebase.

    If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

    Keep distinct concepts in different places. And name your stuff so that it's obvious which concepts live there. You shouldn't have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I'm talking from very much a SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to be more or less understandable in the one place where it is.

    And what about tests? Do you think they help at all?

    I've never had the opportunity to learn a codebase which has had tests; I don't know what it's like!

    What about when you're actually developing? How useful do you find tests in finding bugs or regressions?

    Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn't happen very often that a test finds a bug in the first place. I don't really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like "This code works at the moment, great, ship it! Ah, there's a way that this code doesn't work. Okay, write a test, demonstrate that it doesn't work, fix it, use the test to demonstrate that it's now fixed, and keep the test for future regressions." The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you've fixed them, are way more likely to appear again than new bugs are.

    Does that mean that when you write your code the first time, there are no tests?

    Often. The chance of there being a bug in a new feature is relatively unaffected by whether I've written a test for that new feature, because I'm not good enough at writing tests to think of bugs that I would have written into the code.

    So not writing regression tests for all of your code hasn't affected you too badly?

    There are different kinds of features. Some of them just always work, and are just not flaky; they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it's a waste of time, I'll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it's going to be flaky code as you're writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I'll think "this is going to be flaky" as I'm writing, and then maybe do a test, but not most of the time.

    How do you think your programming style has changed over time?

    I've got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I've got more disciplined about encapsulation, I think. There are operational things, like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote at Red Gate was the memory profiler UI, and that was an actor; I just didn't know the name of it at the time. I don't really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That's definitely a pattern that I've seen recently. I think maybe I'm doing functional programming. Possibly. It's plausible.

    If you had to summarise the essence of programming in a pithy sentence, how would you do it?

    Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

    So you think it's an art rather than a science?

    It's a little bit of engineering, a smidgeon of maths, but it's not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it's mostly art. You can get away with doing architecture and programming entirely by having a good engineering mind, but you're not going to produce anything nice. You're not going to have joy doing it if you have a purely engineering mind. Architects who are just engineering minds are not going to enjoy their job.

    I suppose engineering is the foundation on which you build the art.

    Exactly.

    How do you think programming is going to change over the next ten years?

    There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript's unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it's dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

    What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

    It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript, only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary, and beginners don't write big code. And having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript.

    If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

    The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience, rather than actually telling me my compile errors. And only once you're an experienced enough programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it's not really about size of codebase. If I go and write a tiny program, I'm still going to get value out of writing it in C# using ReSharper, because I'm experienced enough with C# and ReSharper to be able to write code five times faster if I have that help.

    Any other visions of the future?

    Nobody's going to use actors. Because everyone's going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors.

    More Red Gater Coder interviews
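
    For readers who want to see the shape of the pattern Alex describes, here is a minimal, self-contained C# sketch of the message-loop idea. It is not NAct's actual API (NAct hides the queue behind ordinary typed method calls); it simply shows one thread owning the mutable state while everyone else communicates by message:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // A toy actor: a single thread owns the counter, so no locks are needed.
        class CounterActor
        {
            private readonly BlockingCollection<Action> _mailbox =
                new BlockingCollection<Action>();
            private int _count; // touched only by the actor thread

            public CounterActor()
            {
                var worker = new Thread(() =>
                {
                    // Process messages one at a time, in arrival order
                    foreach (var message in _mailbox.GetConsumingEnumerable())
                        message();
                });
                worker.IsBackground = true;
                worker.Start();
            }

            // Fire-and-forget message
            public void Increment()
            {
                _mailbox.Add(() => _count++);
            }

            // Read by message and callback, so the caller never blocks the actor
            public void GetCount(Action<int> reply)
            {
                _mailbox.Add(() => reply(_count));
            }
        }

        class Demo
        {
            static void Main()
            {
                var actor = new CounterActor();
                for (int i = 0; i < 1000; i++) actor.Increment();
                actor.GetCount(n => Console.WriteLine("Count: " + n));
                Thread.Sleep(100); // give the reply a moment to print
            }
        }

    Because only the actor's thread ever touches _count, no locks are needed, and the reply-by-callback style mirrors the "ask by sending a message, get the data back by a message" rule described in the interview.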

    Read the article

  • Java Resources for Windows Azure

    - by BuckWoody
    Windows Azure is a Platform as a Service (a PaaS) that runs code you write. That code doesn't just mean the languages on the .NET platform: you can run code from multiple languages, including Java. In fact, you can develop for Windows and SQL Azure using not only Visual Studio but the Eclipse Integrated Development Environment (IDE) as well. Although not an exhaustive list, here are several links that deal with Java and Windows Azure:

    - Windows Azure Java Development Center: http://www.windowsazure.com/en-us/develop/java/
    - Java Development Guidance: http://msdn.microsoft.com/en-us/library/hh690943(VS.103).aspx
    - Running a Java Environment on Windows Azure: http://blogs.technet.com/b/port25/archive/2010/10/28/running-a-java-environment-on-windows-azure.aspx
    - Run Java with Jetty in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx
    - Using the plugin for Eclipse: http://blogs.msdn.com/b/craig/archive/2011/03/22/new-plugin-for-eclipse-to-get-java-developers-off-the-ground-with-windows-azure.aspx
    - Run Java with GlassFish in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2011/01/17/run-java-with-glassfish-in-windows-azure.aspx
    - Improving experience for Java developers with Windows Azure: http://blogs.msdn.com/b/interoperability/archive/2011/02/23/improving-experience-for-java-developers-with-windows-azure.aspx
    - Java Access to SQL Azure via the JDBC Driver for SQL Server: http://blogs.msdn.com/b/brian_swan/archive/2011/03/29/java-access-to-sql-azure-via-the-jdbc-driver-for-sql-server.aspx
    - How to Get Started with Java, Tomcat on Windows Azure: http://blogs.msdn.com/b/usisvde/archive/2011/03/04/how-to-get-started-with-java-tomcat-on-windows-azure.aspx
    - Deploying Java Applications in Azure: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
    - Using the Windows Azure Storage Explorer in Eclipse: http://blogs.msdn.com/b/brian_swan/archive/2011/01/11/using-the-windows-azure-storage-explorer-in-eclipse.aspx
    - Windows Azure Tomcat Solution Accelerator: http://archive.msdn.microsoft.com/winazuretomcat
    - Deploying a Java application to Windows Azure with Command-line Ant: http://java.interoperabilitybridges.com/articles/deploying-a-java-application-to-windows-azure-with-command-line-ant
    - Video: Open in the Cloud: Windows Azure and Java: http://channel9.msdn.com/Events/PDC/PDC10/CS10
    - AzureRunMe: http://azurerunme.codeplex.com/
    - Windows Azure SDK for Java: http://www.interoperabilitybridges.com/projects/windows-azure-sdk-for-java
    - AppFabric SDK for Java: http://www.interoperabilitybridges.com/projects/azure-java-sdk-for-net-services
    - Information Cards for Java: http://www.interoperabilitybridges.com/projects/information-card-for-java
    - Apache Stonehenge: http://www.interoperabilitybridges.com/projects/apache-stonehenge
    - Channel 9 Case Study on Java and Windows Azure: http://www.microsoft.com/casestudies/Windows-Azure/Gigaspaces/Solution-Provider-Streamlines-Java-Application-Deployment-in-the-Cloud/400000000081

    Read the article

  • Oracle Announces Oracle Insurance Policy Administration for Life and Annuity 9.4

    - by helen.pitts(at)oracle.com
    Today's global insurers require the ability to provide higher levels of service and quickly bring to market life insurance and annuity products that not only help them stand out from the competition, but also stay current with local legislation. To succeed, they require agile and flexible core systems that enable them to meet the unique localization requirements of the markets in which they operate, whether in North America, Asia Pacific or the Pan-European Region. The release of Oracle Insurance Policy Administration for Life and Annuity 9.4, announced today, helps insurers meet this need with expanded international market capabilities that enable them to reduce risk and profitably compete wherever their business takes them. It offers expanded multi-language, unit-linked product, and fund processing capabilities that enable regional and global insurers to rapidly configure and deliver localized products, while providing better service for end users through a single policy administration solution. Key enhancements include:

    - Kanji/Kana language support, pre-defined content, and imperial date processing for the Japanese market
    - New localization flexibility for configuring and managing international mailing addresses, along with regional variations for client information
    - Enhanced capability to calculate unit-linked pricing and valuation, in addition to market-based processing and pre-configured unit-linked content
    - Expanded role-based security and masking capability to further protect sensitive customer data
    - Enhanced capability to restrict processing of specified activities based on time of day and user role, reducing exposure to market-timing risks
    - Further capability to eliminate duplicate client records, helping to reduce underwriting risks and enhance servicing through a single view of the client

    "The ability to leverage a single, rules-driven policy administration system for multiple global operation centers can help insurers realize significant improvements in speed to market, customer service, compliance with regional regulations, and consolidation efforts," noted Celent's Craig Weber, senior vice president, Insurance. "We believe such initiatives are necessary to help the industry address service and distribution imperatives."

    Helping our customers meet these mission-critical business imperatives is a key objective for Oracle Insurance, and active, ongoing dialogue with our customers is an important part of understanding how our solutions can continue to help them achieve success in the marketplace. I had the opportunity to meet with several of our insurance customers at the Oracle Insurance Policy Administration Client Advisory Board meeting last week in Philadelphia, Penn. (View photos on the Oracle Insurance Facebook page.) It was a great forum for Oracle Insurance and our clients. Discussion centered on the latest business and IT trends, with opportunities to learn more about the latest release of Oracle Insurance Policy Administration for Life and Annuity and other Oracle Insurance solutions, such as data warehousing and business intelligence, while exchanging best practices for product innovation and for servicing customers and sales channels.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • 10 Best Programming Podcasts 2010 Edition

    - by mbcrump
    This list is in no particular order; these are just the 10 best programming podcasts I have found so far.
    - Stack Overflow Podcast - Jeff Atwood (of codinghorror.com) and Joel Spolsky (of joelonsoftware.com) discuss the development of their new programming community, StackOverflow.com. [This podcast hasn't been updated in a while, but it's always great to hear more from Jeff Atwood]
    - Hanselminutes - Hanselminutes is a weekly audio talk show with noted web developer and technologist Scott Hanselman, hosted by Carl Franklin. Scott discusses utilities and tools, gives practical how-to advice, and discusses ASP.NET and Windows issues and workarounds. [This podcast has recently started covering random topics like diabetes, plane travel and geek relationship tips. I am not sure if Scott is trying to move to a more mainstream audience or not]
    - Herding Code - A weekly discussion featuring K. Scott Allen (odetocode.com), Kevin Dente, Scott Koon (lazycoder.com), and Jon Galloway. [Great all-around podcast that I would recommend to all]
    - Deep Fried Bytes - An audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff. The show discusses a wide range of topics including application development, operating systems and technology in general. Anything is fair game if it plugs into the wall or takes a battery. [This is one that just keeps getting better]
    - Dot Net Rocks - .NET Rocks! is an Internet audio talk show for Microsoft .NET developers. [One of the first, and usually very high quality content]
    - Connected Show - A podcast covering new Microsoft technology for the developer community, hosted by Dmitry Lyalin and Peter Laudati. [This and Polymorphic are among my favorite podcasts - Dmitry is a great host and I would recommend this to all]
    - Polymorphic Podcast - Object-oriented development, architecture and best practices in .NET. [Craig is an ASP.NET MVP and a great presenter. His podcast is great, and it could only be better if he recorded it more often]
    - ASP.NET Podcast - Wallace B. (Wally) McClure presents interviews and short technical talks on .NET technologies. [Has great information on ASP.NET, of course, as well as iPhone dev]
    - Ruby on Rails Podcast - News and interviews about the Ruby language and the Rails website framework. [Even though I am not a Ruby programmer, I've found this podcast very interesting]
    - Software Engineering Radio - A podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Every ten days a new episode is published covering all topics in software engineering. Episodes are either tutorials on a specific topic or an interview with a well-known character from the software engineering world. All SE Radio episodes are original content; we do not record conferences or talks given in other venues. Each episode comprises two speakers to ensure a lively listening experience. SE Radio is an independent and non-commercial organization. [Another excellent podcast - I would recommend any programmer add it to his/her drive home]
    If I have missed something, please feel free to email me and it might make the 2011 list. =)

    Read the article

  • DeveloperDeveloperDeveloper! Scotland 2010 - DDDSCOT

    - by Plip
    DDD in Scotland was held on the 8th May 2010 in Glasgow and I was there, not as is usual at these kinds of things as an organiser, but as a speaker and delegate. The weekend started for me back on Thursday with the arrival of Dave Sussman at my place in Lancashire; after a curry and watching the Election night TV coverage we retired to our respective beds (yes, I know, I hate to shatter the illusion that we both sleep in the same bed wearing matching pyjamas), ready for the drive up to Glasgow the following afternoon. Before heading up to Glasgow we had to pick up Young Mr Hardy from Wigan, then we began the four-hour drive back in time...
    Something that struck me on the journey up is just how beautiful Scotland is. The menacing landscapes bordered with fluffy sheep and whirly-ma-gigs are awe-inspiring - well worth driving up if you ever get the chance. Anywho, we arrived in Glasgow, got settled into the hotel and went in search of speakers for pre-conference drinks and food. We discovered a gaggle (I believe that's the collective term) of speakers in the bar, and when we reached critical mass headed off to the speakers' dinner location. During dinner, SOMEONE set my hair on FIRE. That's all I'm going to say on the matter. Whilst I was enjoying my evening there was something nagging at me; I realised that I should really write my session, as I was due to give it the following morning. So after a few more drinks I headed back to the hotel and got some well-earned sleep (and washed the fire damage out of my hair).
    Next day, we headed off to the conference, which was a lovely stroll through Glasgow city centre. None of us got mugged, murdered (or set on fire), arriving safely at the venue, which was a bonus. I was asked to read out the opening slides for Barry Carr's session, which I did diligently and with such professionalism that I shocked even myself. At that point I realised that in just over an hour I had to give my presentation, so I headed back to the speaker room to finish writing it. Wham, bam, and it was all over. The session seemed to go well. I was speaking on Exception Driven Development, which isn't so much a technical solution as a mindset around how one should treat exceptions and their code. To be honest, I've not been so nervous giving a session for years - something about this topic worried me. I was concerned I was being too abstract in my thinking, or that what I was saying was so obvious that everyone would know it, but it seems to have been well received, which makes me a happy speaker. Craig Murphy has some brilliant pictures of DDD Scotland 2010.
    After my session was done I grabbed some lunch, then headed back to the hotel and into town to do some shopping (thus my conspicuous omission from the photo above). Later on we headed out to the geek dinner, which again was a rum affair, followed by a few drinks and a little boogie woogie. All in all a well-run, well-attended conference, by the community for the community. I tip my hat to the whole team who put on DDD Scotland!

    Read the article

  • Expected identity UPN when connecting to a service as NETWORK SERVICE

    - by Jim
    Hi,
    We have a web application running in an application pool as 'NETWORK SERVICE'. The web application connects to a service (.svc) on another web server. The other web server also has the service hosted as 'NETWORK SERVICE'; I believe this is the default. The following endpoint, when used anywhere else, works perfectly:

        <endpoint address="http://server123/UnitTrustService/UnitTrustService.svc"
                  binding="wsHttpBinding"
                  bindingConfiguration="WSHttpBinding_UnitTrustService"
                  contract="UnitTrustServiceReference.UnitTrustService"
                  name="WSHttpBinding_UnitTrustService">
          <identity>
            <servicePrincipalName value="server123" />
          </identity>
        </endpoint>

    Unfortunately, when executed from the web site, we get the following error:

        System.ServiceModel.Security.MessageSecurityException: The identity check failed for the outgoing message. The expected identity is 'identity(http://schemas.xmlsoap.org/ws/2005/05/identity/right/possessproperty: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn)' for the 'http://server123/UnitTrustService/UnitTrustService.svc' target endpoint.
        Server stack trace: at System.ServiceModel.Security.IdentityVerifier.EnsureIdentity(EndpointAddress serviceReference, AuthorizationContext authorizationContext, String errorString)...

    Any ideas? I have tried running this as LOCAL SYSTEM on the web server machine with exactly the same configuration and it works perfectly. Does it have something to do with IIS?
    Regards, Craig.
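    One configuration change often suggested for this identity-check error is a sketch only, not a confirmed fix for this poster's environment: it assumes the remote service really does run under the machine's NETWORK SERVICE account, for which Windows registers the host/-prefixed SPN form rather than the bare machine name used above.

        <identity>
          <!-- host/server123 is the machine-account SPN form (an assumption);
               the original config used the bare name "server123" -->
          <servicePrincipalName value="host/server123" />
        </identity>

    If the service instead runs under a domain user account, the matching element under <identity> would be a <userPrincipalName> for that account rather than an SPN, which would also explain why the stack reports an expected UPN identity.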

    Read the article

  • Represent multiple Null/Generic objects in an ActiveRecord association?

    - by slothbear
    I have a Casefile model that belongs_to a Doctor. In addition to all the "real" doctors, there are several generic Doctors: "self-treated", "not specified", and "removed" (it used to have a real doctor, but no longer does). I suspect there will be even more generic values in the future. I started with special "doctors" in the database, generated from seed data. The generic Doctors only need to respond to the "name" and "real_doctor?" methods. This worked with one, was strained with two, and now feels completely broken. I want to change the behavior and can't figure out how to test it, which is a bad sign. Creating all the generic objects for testing is also troublesome, since it means including fake values to pass validation of the required Doctor attributes. The Null Object pattern works well for one generic object: the "name" method could check casefile.doctor.nil? and return "self-treated", as demonstrated by Craig Ambrose. What pattern should I use when there are multiple generic objects with very limited state?
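    One direction worth sketching (plain Ruby with hypothetical names, an illustration rather than a drop-in answer) is to replace the seeded rows with lightweight value objects and a registry keyed by the reason the real doctor is absent:

        class GenericDoctor
          attr_reader :name

          def initialize(name)
            @name = name
          end

          # Generic doctors are never real, so no database-backed state is needed.
          def real_doctor?
            false
          end
        end

        # Adding a new generic value is one line here, with no fake attributes
        # needed to satisfy Doctor validations.
        GENERIC_DOCTORS = {
          self_treated:  GenericDoctor.new("self-treated"),
          not_specified: GenericDoctor.new("not specified"),
          removed:       GenericDoctor.new("removed")
        }.freeze

        # In Casefile, a hypothetical wrapper could then pick the right null object,
        # assuming a generic_doctor_key column records why no real doctor is set:
        #
        #   def effective_doctor
        #     doctor || GENERIC_DOCTORS.fetch(generic_doctor_key.to_sym)
        #   end

    Each generic object is trivially constructible in tests, and the registry keeps the "which null object?" decision in one place instead of scattered nil checks.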

    Read the article

  • Call an HTML button onclick event from an ASP.NET server-side login authenticate event

    - by CraigJSte
    I need to programmatically click an HTML button from a login event (code-behind). The HTML button sends variables to Flash with no postback, using the ExternalInterface API via JavaScript. Going from SWF to ASPX works great, but I need to send User.Identity from ASPX to the SWF via JavaScript after authenticating in the login event, and I am having an impossible time getting that to work (calling the HTML event from the Login button). I have tried scripting the JavaScript call into the login event with no luck, possibly because the postback clears the SWF variables - so perhaps keeping the steps separate (login, then the HTML send) would work. Here is my relevant code:

        <script type="text/javascript">
        function sendToActionScript(value) {
            swfobject.getObjectById("Property").sendToActionScript(value);
        }
        </script>

        <object ...> <!-- SWF file embedded --> </object>

        <form id="form1" runat="server">
            <asp:Login id="Login1" OnAuthenticate="Login1_Authenticate" />
        </form>

        <form id="form" onsubmit="return false;">
            <input type="text" name="input" id="input" value="" runat="server" />
            <button id="btnInput" runat="server" causesvalidation="false" visible="true"
                    style="width: 51px"
                    onclick="sendToActionScript(this.form.input.value);">Send</button><br />
        </form>

        // CODE BEHIND
        protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
        {
            // Do something to get the user id and role,
            // bind the string (user or role) to input.value,
            // then call the HTML button onclick event to send it to the SWF file -
            // which I could put in a separate function and call from Login1_Authenticate.
        }

    Can anyone help? I am out of ideas.
    Craig
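    One approach that often comes up for this postback problem (a sketch, not a verified fix for this page: it assumes a full postback is acceptable and that the value to push can be resolved in the handler) is to skip the HTML button entirely and register a startup script from the authenticate handler, so the browser invokes sendToActionScript once the response loads:

        protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
        {
            // Authenticate as usual, then resolve the value to push to Flash.
            // "userIdentity" is a placeholder; resolve it from Membership/User in real code.
            string userIdentity = "example-user";

            // Escape the value if it can contain quotes; kept simple for the sketch.
            string script = string.Format(
                "if (window.sendToActionScript) {{ sendToActionScript('{0}'); }}",
                userIdentity);

            // Runs on the client after the page (and the embedded SWF) reload.
            ClientScript.RegisterStartupScript(GetType(), "pushToSwf", script, true);
        }

    One caveat: if the SWF has not finished initialising when the startup script runs, the call is lost, so a more robust variant is to have the SWF invoke a "ready" JavaScript callback and push the stored value from there.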

    Read the article

  • PDO Database Connections Problem

    - by Metropolis
    Hey everyone,
    Over a year ago I created my own database classes, which use PDO and handle all of the preparing, executing, and closing of connections. These classes have been working great up until now. There are two different database servers I am pulling from: MySQL and MS SQL Express. I am retrieving an employee id from the MySQL server and using it to get that employee's information from the MS SQL server. There are about 11k records coming from the MySQL server, and my program only makes it through about 1200 before crashing with an error like the following:

        Connection failed (odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC)
        Class (PDOException) SQLSTATE[08001] SQLDriverConnect: 0 [unixODBC][FreeTDS][SQL Server]Unable to connect to data source

    It seems like the program is not able to connect to the data source, but it runs the exact same query about 30 times before this and has no problem. Also, I have thoroughly checked all of the data going into the query and it all looks fine. I believe the issue may be that too many connections are being created, but I have tried to close all connections in many different places and nothing seems to fix the problem. Any debugging help or suggestions would be appreciated!
    Craig Metrolis
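    One way to rule out connection exhaustion (a sketch, not the poster's classes; the helper name, credential variables, and table name are placeholders, while the DSN is taken from the error message above) is to cache a single PDO handle per DSN and reuse it across the loop instead of constructing a new connection per record:

        <?php
        function shared_pdo($dsn, $user, $pass)
        {
            // One handle per DSN for the lifetime of the script.
            static $handles = array();
            if (!isset($handles[$dsn])) {
                $handles[$dsn] = new PDO($dsn, $user, $pass, array(
                    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
                ));
            }
            return $handles[$dsn];
        }

        // Inside the per-employee loop ($employeeId comes from the MySQL result):
        $mssql = shared_pdo('odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC', $user, $pass);
        $stmt  = $mssql->prepare('SELECT * FROM employees WHERE employee_id = ?'); // hypothetical table
        $stmt->execute(array($employeeId));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        $stmt->closeCursor(); // free the statement before the next iteration

    If the failures stop once the handle is shared, the original classes were likely opening a fresh ODBC connection per query faster than the server or FreeTDS could release them.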

    Read the article
