Search Results


  • Exchange 2007 ignoring Send Connectors (again)

    - by gravyface
    Wow, I'm at a loss here -- I posted this exact same question a while back and it's doing the exact same thing: the Send Connector I've created for "Microsoft Domains" (hotmail.com, cost 1) is being ignored again, and mail is routed through the "Default" Send Connector (*, cost 10). Last time, I had the same cost on both Send Connectors, but I've since tried setting the Default connector to 5, 10, 100, etc., and regardless, all mail gets routed through that connector (which smarthosts through Postini). Besides calling in an air strike on Redmond, what else can I do? MS is blocking Postini again; I need to get this working permanently.

  • Silverlight Cream for November 21, 2011 -- #1171

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Sumit Dutta, Morten Nielsen, Jesse Liberty, Jeff Blankenburg (-2-), Brian Noyes, and Tony Champion.

    Above the Fold:
    - Silverlight: "PV Basics: Client-side Collections" (Tony Champion)
    - WP7: "Pushpin Clustering with the Windows Phone 7 Bing Map control" (Colin Eberhardt)

    Shoutouts:
    - Michael Palermo's latest Desert Mountain Developers is up
    - Michael Washington's latest Visual Studio #LightSwitch Daily is up

    From SilverlightCream.com:
    - Pushpin Clustering with the Windows Phone 7 Bing Map control: Colin Eberhardt is back discussing pushpins for a Bing Maps app on WP7 and provides a utility class that clusters pushpins, allowing you to render 1000s of pins in an app... all the explanation and all the code.
    - Part 22 - Windows Phone 7 - Tile Push Notification: Part 22 in Sumit Dutta's WP7 series is about Tile Push Notification... nice tutorial with all the code listed.
    - Correctly displaying your current location: Morten Nielsen demonstrates formatting the information from the GPS on your WP7 into something intelligible and useful.
    - Spiking the Pomodoro Timer: Jesse Liberty put up a quick and dirty version of a Pomodoro timer for WP7.1 to explore the technical challenges of the Full Stack Phase 2 he's cranking up.
    - 31 Days of Mango | Day #11: Network: Jeff Blankenburg's Number 10 in his 31 Days quest of WP7.1 is about the NetworkInformation namespace, which gives you all sorts of info on the user's device network connection availability, type, etc.
    - 31 Days of Mango | Day #11: Live Tiles: Jeff Blankenburg takes off on Live Tiles for Day 11... big topic for a one-day post, but he takes off on it... updating and Live Tiles too.
    - Working with Prism 4 Part 2: MVVM Basics and Commands: Brian Noyes has part 2 of his Prism/MVVM series up at SilverlightShow... very nice tutorial on the basics of getting a view and viewmodel up, and setting up an ICommand to launch an Edit View... plus the code to peruse.
    - PV Basics: Client-side Collections: Tony Champion is starting a series on Silverlight 5 and PivotViewer... First up is some basics in dealing with the control in SL5 and talking about Client-side Collections... great informative tutorial and all the code.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

  • Apache default NameVirtualHost does not work

    - by Luc
    Hello, I have 4 name-based virtual hosts in my Apache configuration, each one using proxy_http to forward requests to the correct server. They work fine.

        <VirtualHost *:80>
            ServerName application_name.domain.tld
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://server_ip/
            ProxyPassReverse / http://server_ip/
        </VirtualHost>

    I then tried to add a default NameVirtualHost to take care of the requests for which the server name does not match one of the four others; otherwise a request like some_weird_stuff.domain.tld would be forwarded to one of the 4 VHs. I then added this one:

        <VirtualHost *:80>
            ServerAlias "*"
            DocumentRoot /var/www/
        </VirtualHost>

    At the beginning it seemed to work fine, but at some point it appears that requests that should be handled by one of the 4 regular hosts get "eaten" by the default one! If I a2dissite this default host, everything is back to normal... I do not really understand this. If you have any clue... thanks a lot, Luc

  • Windows 7 / ATI CCC - Monitor limit?

    - by james.ingham
    Hi all, I have an ATI 4850 X2 and, back in the days when Windows 7 was in beta, I had it running with 4 monitors (Crossfire off). Now I have come to set this up again, with the RTM version of Windows 7 and the latest ATI drivers, and it seems I can only have two monitors on at one time. If I try to set the monitors up in Control Panel, it simply disables the non-primary monitor, and if I attempt to do it in CCC, I get the message "To extend the desktop, a desktop or display must be disabled." Does anyone know why this is and/or how to fix it? I've tried both the latest ATI drivers and the drivers (I think) I was using over a year ago when I had this setup. Thanks - James

  • How to make sysctl network bridge settings persist after a reboot?

    - by Zack Perry
    I am setting up a notebook for software demo purposes. The machine has 8GB RAM, a Core i7 Intel CPU, a 128GB SSD, and runs Ubuntu 12.04 LTS 64-bit. The notebook is used as a KVM host and runs a few KVM guests. All such guests use the virbr0 default bridge. To enable them to communicate with each other using multicast, I added the following to the host's /etc/sysctl.conf:

        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0

    Afterwards, following man sysctl(8), I issued the following:

        sudo /sbin/sysctl -p /etc/sysctl.conf

    My understanding is that this should make these settings persist over reboots. I tested it, and was surprised to find the following:

        root@sdn1:/proc/sys/net/bridge# more *tables
        ::::::::::::::
        bridge-nf-call-arptables
        ::::::::::::::
        1
        ::::::::::::::
        bridge-nf-call-ip6tables
        ::::::::::::::
        1
        ::::::::::::::
        bridge-nf-call-iptables
        ::::::::::::::
        1

    All the defaults are coming back! Yes, I can use some kludgy workarounds, such as putting /sbin/sysctl -p /etc/sysctl.conf into the host's /etc/rc.local, but I would rather "do it right". Did I misunderstand the man page, or is there something I missed? Thanks for any hints. -- Zack

  • SharePoint Search crawl not working

    - by Satish
    Search crawling is erroring out on my MOSS 2007 installation. I get the following error in the crawl logs for all the web apps I have:

        http://mysites.devserver
        URL could not be resolved. The host may be unavailable, or the proxy settings are not configured correctly on the index server.

    The Application event log also has the following corresponding error:

        The start address http://mysites.devserver cannot be crawled. Context: Application 'SSPMain', Catalog 'Portal_Content' Details: The URL of the item could not be resolved. The repository might be unavailable, or the crawler proxy settings are not configured. To configure the crawler proxy settings, use the Proxy and Timeout page in search administration. (0x80041221)

    I'm using Windows Server 2008. I tried accessing the site using the above-mentioned URL and it's available. I did the registry setting for the loopback issue found here: http://support.microsoft.com/kb/896861 -- still no luck. Any ideas?

  • Twitter User/Search Feature Header Support in LINQ to Twitter

    - by Joe Mayo
    LINQ to Twitter's goal is to support the entire Twitter API. So, if you see a new feature pop up, it will be in-queue for inclusion. The same holds for the new X-Feature… response headers for User/Search requests. However, you don't have to wait for a special property on the TwitterContext to access these headers; you can just use them via the TwitterContext.ResponseHeaders collection. The following code demonstrates how to access the new X-Feature… headers with LINQ to Twitter:

        var user =
            (from usr in twitterCtx.User
             where usr.Type == UserType.Search &&
                   usr.Query == "Joe Mayo"
             select usr)
            .FirstOrDefault();

        Console.WriteLine(
            "X-FeatureRateLimit-Limit: {0}\n" +
            "X-FeatureRateLimit-Remaining: {1}\n" +
            "X-FeatureRateLimit-Reset: {2}\n" +
            "X-FeatureRateLimit-Class: {3}\n",
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Limit"],
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Remaining"],
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Reset"],
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Class"]);

    The query above is from the User entity, whose type is Search, allowing you to search for the Twitter user whose name is specified by the Query parameter filter. After materializing the query with FirstOrDefault, twitterCtx will hold all of the headers, including X-Feature…, that Twitter returned. Running the code above will display results similar to the following:

        X-FeatureRateLimit-Limit: 60
        X-FeatureRateLimit-Remaining: 59
        X-FeatureRateLimit-Reset: 1271452177
        X-FeatureRateLimit-Class: namesearch

    In addition to the X-Feature… headers, you might have noticed that the TwitterContext.ResponseHeaders collection will contain any HTTP headers that Twitter sends back to a query. Therefore, you'll be able to access new Twitter headers anytime in the future with LINQ to Twitter. @JoeMayo

  • ISA forms authentication problems after installing MOSS SP2

    - by user22215
    Guys, I have a problem that's flared back up after installing WSS and MOSS Service Pack 2. The problem centers on users being prompted to enter credentials when interacting with Office documents. This problem came up before, and I was able to go into ISA Server and configure a persistent cookie on the web listener. As we all know, when configuring a cookie you have two options: use it only on private computers, or use it on all computers. If I select "use on all computers", I can't even log in to SharePoint from the forms page; however, if I select "use only on private computers", I'm able to log in and I also don't get prompted when opening Office documents. So I would like to ask: has something changed with SharePoint Service Pack 2? That's the only change that's been made to my environment.

  • Formatting Keywords to UPPERCASE In Oracle SQL Developer

    - by thatjeffsmith
    I received this question from a customer today, and it took me more than a few minutes to remember where this preference was located in SQL Developer. This tells me that the topic is ripe for blogging. How do I go from:

        select * from scott.emp where ename like '%JEFF%'

    to:

        SELECT * FROM scott.emp WHERE ename LIKE '%JEFF%'

    It's all in the formatting. You need to access the formatting preferences under the Tools menu. It takes a bit of navigating to get there, so bear with me:

        Tools > Database > SQL Formatter > Oracle Formatting > click 'Edit' on the profile > Other > Case change: 'Keywords Uppercase'

    It's easy to find once you know where to look. You can tell it to leave the case alone, upper everything, upper only the keywords, or lower everything.

    Accessing the Formatter Options: we allow separate formatting options for different RDBMSs. You need to make sure you're accessing the 'Oracle Formatting' page in the preferences. You can then choose to edit the default options, OR you can do what I have done: save the defaults as a new set of options. I've called my profile 'JeffCustom.' I can now switch back and forth through different sets of formatting options. You need to hit the 'Edit' button to get to the formatting options editor; a good number of people seem to miss this. Select your profile, then hit the 'Edit' button.

  • Exporting bind and keyframe bone poses from Blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or maybe I'm missing something in the math. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions:

    - I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means that whenever I export a matrix from Blender through Python, I export it (in text format) in 4 lines, each representing a column, so that I can then read them back into my engine like this:

        if (fscanf(fileHandle, "%f %f %f %f",
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[0],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[1],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[2],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) {
            if (fscanf(fileHandle, "%f %f %f %f",
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[4],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[5],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[6],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) {
                if (fscanf(fileHandle, "%f %f %f %f",
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[8],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[9],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[10],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) {
                    if (fscanf(fileHandle, "%f %f %f %f",
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[12],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[13],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[14],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) {

    - I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations; this is not final code.

    Having said that, if I understand correctly, the basic idea of skinning/animation is:

    - I have a mesh made up of vertices.
    - I have the mesh model-world transform W.
    - I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj, meaning the matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1.
    - I have keyframes, each containing a set of current poses Cj for each joint j. These are initially imported to my engine in TQS format, but after (S)LERPing them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones), only for the current spatial configuration of each joint at that frame.

    Given the above, the skeletal animation algorithm is, on each frame: check how much time has elapsed and compute the resulting current time in the animation, from 0 (meaning frame 0) to 1 (meaning the end of the animation; oh, and I'm looping forever, so the time is mod(total duration)). Then, for each joint:

    1. Calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1.
    2. Use the current animation time to LERP the components of the TQS and come up with an interpolated current pose matrix Cj, which should transform from the joint's current configuration space to world space. Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj.
    3. Now that I have world versions of Bj and Cj, I store this joint's world skinning matrix K_wj = Cj_w Bj_w^-1.

    The above is roughly implemented like so:

        - (void)update:(NSTimeInterval)elapsedTime {
            static double time = 0;
            time = fmod((time + elapsedTime), 1.);
            uint16_t LERPKeyframeNumber = 60 * time;
            uint16_t lkeyframeNumber = 0;
            uint16_t lkeyframeIndex = 0;
            uint16_t rkeyframeNumber = 0;
            uint16_t rkeyframeIndex = 0;
            for (int i = 0; i < aClip.keyframesCount; i++) {
                uint16_t keyframeNumber = aClip.keyframes[i].number;
                if (keyframeNumber <= LERPKeyframeNumber) {
                    lkeyframeIndex = i;
                    lkeyframeNumber = keyframeNumber;
                } else {
                    rkeyframeIndex = i;
                    rkeyframeNumber = keyframeNumber;
                    break;
                }
            }
            double lTime = lkeyframeNumber / 60.;
            double rTime = rkeyframeNumber / 60.;
            double blendFactor = (time - lTime) / (rTime - lTime);
            GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
            GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];
            for (int i = 0; i < aSkeleton.jointsCount; i++) {
                F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i];
                F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i];
                GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
                GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
                GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);
                GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
                currentTransform = GLKMatrix4Multiply(currentTransform,
                    GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z));
                currentTransform = GLKMatrix4Multiply(currentTransform,
                    GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z));
                if (aSkeleton.joints[i].parentIndex == -1) {
                    bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform;
                    currentPosePalette[i] = currentTransform;
                } else {
                    bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform,
                                                            bindPosePalette[aSkeleton.joints[i].parentIndex]);
                    currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex],
                                                               currentTransform);
                }
                aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
            }
        }

    At this point, I should have my skinning palette. So on each frame in my vertex shader, I do:

        uniform mat4 modelMatrix;
        uniform mat4 projectionMatrix;
        uniform mat3 normalMatrix;
        uniform mat4 skinningPalette[6];

        attribute vec4 position;
        attribute vec3 normal;
        attribute vec2 tCoordinates;
        attribute vec4 jointsWeights;
        attribute vec4 jointsIndices;

        varying highp vec2 tCoordinatesVarying;
        varying highp float lIntensity;

        void main() {
            vec3 eyeNormal = normalize(normalMatrix * normal);
            vec3 lightPosition = vec3(0., 0., 2.);
            lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));
            tCoordinatesVarying = tCoordinates;
            vec4 skinnedVertexPosition = vec4(0.);
            for (int i = 0; i < 4; i++) {
                skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
            }
            gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition;
        }

    The result: the mesh parts that are supposed to animate do animate and follow the expected motion; however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off.

    So, a few observations. In the above shader, notice that I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model space to joint space and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen.

    As far as exporting the joints from Blender, my Python script exports, for each armature bone in bind pose, its matrix in this way:

        def DFSJointTraversal(file, skeleton, jointList):
            for joint in jointList:
                poseJoint = skeleton.pose.bones[joint.name]
                jointTransform = poseJoint.matrix.inverted()
                file.write('Joint ' + joint.name + ' Transform {\n')
                for col in jointTransform.col:
                    file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
                DFSJointTraversal(file, skeleton, joint.children)
                file.write('}\n')

    And for current / keyframe poses (assuming I'm in the right keyframe):

        def exportAnimations(filepath):
            # Only one skeleton per scene
            objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE']
            if len(objList) == 0:
                return
            elif len(objList) > 1:
                return  # raise exception? dialog box?
            skeleton = objList[0]
            jointNames = [bone.name for bone in skeleton.data.bones]
            for action in bpy.data.actions:
                # One animation clip per action in Blender, named as the action
                animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip"
                file = open(animationClipFilePath, 'w')
                file.write('target skeleton: ' + skeleton.name + '\n')
                file.write('joints count: {:d}'.format(len(jointNames)) + '\n')
                skeleton.animation_data.action = action
                keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves])
                keyframes = []
                for fcurve in action.fcurves:
                    for keyframe in fcurve.keyframe_points:
                        keyframes.append(keyframe.co[0])
                keyframes = set(keyframes)
                keyframes = [kf for kf in keyframes]
                keyframes.sort()
                file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n')
                for kfIndex in keyframes:
                    bpy.context.scene.frame_set(kfIndex)
                    file.write('keyframe: {:d}\n'.format(int(kfIndex)))
                    for i in range(0, len(skeleton.data.bones)):
                        file.write('joint: {:d}\n'.format(i))
                        joint = skeleton.pose.bones[i]
                        jointCurrentPoseTransform = joint.matrix
                        translationV = jointCurrentPoseTransform.to_translation()
                        rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion()
                        scaleV = jointCurrentPoseTransform.to_scale()
                        file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                        file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                        file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
                    file.write('\n')
            file.close()

    Which I believe follows the theory explained at the beginning of my question. But then I checked out Blender's DirectX .x exporter for reference... and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used, so you can compare):

        if joint.parent:
            jointTransform = poseJoint.parent.matrix.inverted()
        else:
            jointTransform = Matrix()
        jointTransform *= poseJoint.matrix

    and exporting current keyframe poses like this:

        if joint.parent:
            jointCurrentPoseTransform = joint.parent.matrix.inverted()
        else:
            jointCurrentPoseTransform = Matrix()
        jointCurrentPoseTransform *= joint.matrix

    Why are they using the parent's transform instead of that of the joint in question? Isn't the joint transform assumed to exist in the context of a parent transform, since after all it transforms from this joint's space to its parent's? And why are they concatenating in the same order for both bind poses and keyframe poses, if these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.
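    One way to pressure-test the composition rules stated above, independent of Blender, is a small numerical sanity check: with column vectors, Cj_w = C0 ... Cj, and Bj_w^-1 = (B0 ... Bj)^-1, the skinning matrix K_wj = Cj_w Bj_w^-1 must collapse to the identity whenever the current pose equals the bind pose, i.e. skinning should move nothing. Here is a minimal sketch in Python with NumPy; the three-joint chain of random rigid transforms is invented purely for illustration:

        import numpy as np

        rng = np.random.default_rng(42)

        def random_affine():
            # Random rigid transform (rotation + translation) as a 4x4 matrix,
            # column-vector convention: points transform as p' = M @ p.
            q = rng.normal(size=4)
            q /= np.linalg.norm(q)          # unit quaternion (w, x, y, z)
            w, x, y, z = q
            M = np.eye(4)
            M[:3, :3] = [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                         [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                         [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]
            M[:3, 3] = rng.normal(size=3)   # arbitrary translation
            return M

        # B[j]: joint j's bind pose relative to its parent (a made-up 3-joint chain).
        B = [random_affine() for _ in range(3)]

        # World bind pose of joint j: B0 B1 ... Bj (applied right-to-left to column vectors).
        Bw = [B[0]]
        for j in range(1, 3):
            Bw.append(Bw[j - 1] @ B[j])

        # If each current pose C[j] equals the bind pose B[j], then
        # K[j] = Cw[j] @ inv(Bw[j]) must be the identity for every joint.
        Cw = Bw
        for j in range(3):
            K = Cw[j] @ np.linalg.inv(Bw[j])
            assert np.allclose(K, np.eye(4))
        print("bind pose yields identity skinning matrices: OK")

    If an exporter's matrices fail this identity test inside the engine, the mismatch is in the matrix layout (row- vs column-major) or in the concatenation order, which is exactly the kind of discrepancy the .x exporter snippets above hint at.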

  • What makes a Software Craftsman?

    - by Liam McLennan
    At the end of my visit to 8th Light, Justin Martin was kind enough to give me a ride to the train station for my train back to O'Hare. Just before he left, he asked me an interesting question, which I then posted to Twitter:

        Liam McLennan: .@JustinMartinM asked what I think is the most important attributes of craftsmen. I said, "desire to learn and humility". What's yours? 6:25 AM Apr 17th via TweetDeck

    Several people replied with excellent contributions:

        Alex Hung: @liammclennan I think kaizen sums up craftmanship pretty well, which is almost same as yours

        Steve Bohlen: @alexhung @liammclennan those are both all about saying "knowing what you don't know and not being afraid to go learn it" (and I agree!)

        Matt Roman: @liammclennan @JustinMartinM a tempered compulsion for constant improvement, and an awareness of what needs improving.

        Justin Martin: @mattroman @liammclennan a faculty for asking challenging questions, and a persistence to battle through difficult obstacles barring growth

    I thought this was an interesting conversation, and I would love to see other people contribute their opinions. My observation is that Alex, Steve, Matt and I seem to have essentially the same answer in different words. It is also interesting to note (as Alex pointed out) that these definitions are very similar to Alt.NET and the lean concept of kaizen.

  • Advice on career development [closed]

    - by Mike Young
    I am an amateur programmer working at a start-up. I didn't try coding at college. I've been working on web development for 2 months now and I'm satisfied with my progress; my project will go live soon. I work on the front end and my colleague integrates my work into his, so I decided to learn back-end technologies so that I would be able to work on a project from scratch and help my company build up. I recently got to know about the technologies used by Facebook and was fascinated; I want to learn them, work with them, and keep motivating myself. Now I want to work on building a product from scratch; be good at database concepts and a language like Ruby or Python; and get to know load balancing, dynamic requests from servers, hosting a website, real-time communication, secured login, implementing a sophisticated search feature for the app, and using Git, all by the end of the project. I would like to become a full-stack developer in due course and learn everything in detail, so I have decided to keep myself free of any time frame. I would like to use both a relational and a non-relational database for the project. I have no experience except some beginner knowledge of HTML5, CSS and JavaScript. I would like to get some advice on how to proceed forward step by step, what technologies to pick up, and a project idea which includes all the above.

  • Unable to see External HDD in Windows Explorer

    - by Jamie Keeling
    I have bought a 320GB external HDD which I want to use with my PlayStation 3. I knew it would only work with a FAT32 file system, so I formatted it using some free HP software -- unbeknownst to me, that tool only works up to 32GB. After seeing this I panicked, downloaded Partition Wizard Home Edition and deleted the partition. As I was about to create a new partition to put it back to NTFS (at this point I just wanted to be able to use it at all), I accidentally knocked the HDD's cable out of my computer, and since replacing it the external HDD is no longer visible in My Computer. Disk Management asks me to initialise the disk using MBR, but it fails, saying "Copy protected". Even the partitioning software I previously mentioned can't do anything about it; all it says is "Bad Condition", and I can't perform any operations on the drive. Would anybody be able to guide me in getting this sorted? I'm terrified I've wasted a perfectly good 320GB HDD.

  • Merging Waterfall and Agile – Getting the Worst of Both Worlds

    - by Nick Harrison
    Many people have seen and appreciate the elegance and practicality of agile methodologies. Sadly, there is still not widespread adoption. There is still push-back from many directions and from many different sources:

    - Some people don't understand how it is supposed to work.
    - Some people don't believe that it could possibly work.
    - Some people mistakenly believe that it is just code for a lazy project team trying to wiggle out of structure.
    - Some people mistakenly believe that it can work only with a very small, highly trained team.
    - Some people are afraid of the control that they feel they will be losing.

    I have seen some people try to merge agile and waterfall hoping to achieve the best of both worlds. Unfortunately, the reality is that you end up with the worst of both worlds. And they both can get pretty bad.

    Another Sad Reality

    Some people, in an effort to get buy-in for following an agile methodology, have attempted to merge these two practices. Sometimes this may stem from trying to assuage individual fears of losing relevance. Sometimes it may be to meet contractual obligations or to fulfill regulatory requirements. Sometimes they may not know better. These two approaches to software development cannot coexist on the same project.

    Let's review the main tenets of the Agile Manifesto:

    - Individuals and interactions over processes and tools
    - Working software over comprehensive documentation
    - Customer collaboration over contract negotiation
    - Responding to change over following a plan

    That is, while there is value in the items on the right, we value the items on the left more. Meanwhile, the main tenets of the waterfall approach could be summarized as:

    - Processes and procedures over individuals
    - Comprehensive documentation proves that the software works
    - Well-defined contracts and negotiations protect the customer relationship
    - If the plan is made right, there should be no change

    Merging these two approaches will always end badly.

  • Redistribution of sqlpackage.exe [SSDT]

    - by jamiet
    This is a short note for anyone that may be interested in redistributing sqlpackage.exe. If this isn't you, then there's no need to keep reading; ostensibly this is here for anyone that bingles for this information.

    sqlpackage.exe is a command-line tool that ships with SQL Server Data Tools (SSDT) in SQL Server 2012, and its main purpose (amongst other things) is to deploy .dacpac files from the command line. It's quite conceivable that one might want to install only sqlpackage.exe rather than the full SSDT suite (for example, on a production server), and I myself have recently had that need. I enquired to the SSDT product team about the possibility of doing this. I said:

        Back in VS DB Proj days it was possible to use VSDBCMD.exe on a machine that did not have the full VS shell install by shipping lots of pre-requisites along for the ride (details at How to: Prepare a Database for Deployment From a Command Prompt by Using VSDBCMD.EXE). Is there a similar mechanism for using VSDBCMD.exe's replacement, sqlpackage.exe?

    Here was the reply from Barclay Hill, who heads up the development team:

        Yes, SQLPackage.exe is the analogy of VSDBCMD.exe. You can acquire it separately, in a stand-alone package, by installing DACFX. You can get it from:
        - Feature pack: http://www.microsoft.com/en-us/download/details.aspx?id=29065
        - Web Platform Installer: http://www.microsoft.com/web/gallery/install.aspx?appid=DACFX
        You will notice it has dependencies on SQLDOM and SQLCLRTYPES. WebPI will install these for you, but it is à la carte on the feature pack.

    So, now you know. I didn't enquire about licensing of DACFX, but given SSDT is free, I am going to assume that the same applies to DACFX too. @Jamiet
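    (As an aside: once DACFX is in place, a typical .dacpac deployment from the command line looks something like the invocation below. The file, server, and database names here are invented for illustration; run sqlpackage.exe with no arguments to see the full parameter list.)

        sqlpackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:MYSERVER /TargetDatabaseName:MyDatabase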

  • Project Euler 51: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here's my solution for Project Euler Problem 51. I know I started back up with Python this week, but I have three more Ruby solutions in the hopper and I wanted to share. For the record, Project Euler 51 was the second-hardest Euler problem for me thus far. Yeah. As always, any feedback is welcome.

        # Euler 51
        # http://projecteuler.net/index.php?section=problems&id=51
        # By replacing the 1st digit of *3, it turns out that six
        # of the nine possible values: 13, 23, 43, 53, 73, and 83,
        # are all prime.
        #
        # By replacing the 3rd and 4th digits of 56**3 with the
        # same digit, this 5-digit number is the first example
        # having seven primes among the ten generated numbers,
        # yielding the family: 56003, 56113, 56333, 56443,
        # 56663, 56773, and 56993. Consequently 56003, being the
        # first member of this family, is the smallest prime with
        # this property.
        #
        # Find the smallest prime which, by replacing part of the
        # number (not necessarily adjacent digits) with the same
        # digit, is part of an eight prime value family.

        timer_start = Time.now

        require 'mathn'

        def eight_prime_family(prime)
          0.upto(9) do |repeating_number|
            # Assume mask of 3 or more repeating numbers
            if prime.count(repeating_number.to_s) >= 3
              ctr = 1
              (repeating_number + 1).upto(9) do |replacement_number|
                family_candidate = prime.gsub(repeating_number.to_s, replacement_number.to_s)
                ctr += 1 if (family_candidate.to_i).prime?
              end
              return true if ctr >= 8
            end
          end
          false
        end

        # Wanted to loop through primes using Prime.each
        # but it took too long to get to the starting value.
        n = 9999
        while n += 2
          next if !n.prime?
          break if eight_prime_family(n.to_s)
        end

        puts n
        puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

  • SYS2 Scripts Updated – Scripts to monitor database backup, database space usage and memory grants now available

    - by Davide Mauri
    I've just released three new scripts in my "sys2" script collection, which can be found on CodePlex:

    Project Page: http://sys2dmvs.codeplex.com/
    Source Code Download: http://sys2dmvs.codeplex.com/SourceControl/changeset/view/57732

    The three new scripts are the following:

    - sys2.database_backup_info.sql
    - sys2.query_memory_grants.sql
    - sys2.stp_get_databases_space_used_info.sql

    Here are some more details.

    database_backup_info

    This script was made to quickly check if and when backups were done. It will report the last full, differential and log backup date and time for each database. Along with this information, you'll also get some additional metadata that shows whether a database is read-only, plus its recovery model. By default it will check only the last seven days, but you can change this value by specifying how many days back you want to check.

    To analyze the last seven days, and list only the databases with FULL recovery model and no log backup:

        select * from sys2.databases_backup_info(default) where recovery_model = 3 and log_backup = 0

    To analyze the last fifteen days, and list only the databases with FULL recovery model and a differential backup:

        select * from sys2.databases_backup_info(15) where recovery_model = 3 and diff_backup = 1

    I just love this script; I use it every time I need to check that backups are not too old and that t-log backups are correctly scheduled.

    query_memory_grants

    This is just a wrapper around sys.dm_exec_query_memory_grants that enriches the default result set with the text of the query for which memory has been granted or is waiting for a memory grant and, optionally, its execution plan.

    stp_get_databases_space_used_info

    This is a stored procedure that lists all the available databases and, for each one, the overall size, the used space within that size, the maximum size it may reach, and the auto-grow options. This is another script I use every day in order to be able to monitor, track and forecast database space usage.

    As usual, feedback and suggestions are more than welcome!

  • Restoring Time Machine from two Macs onto one (new) Mac

    - by Dan
    My parents used to have two Macs: an "iLamp-style" iMac for my dad, and an iBook G4 for my mom. A while back, I set up the iMac with an external FireWire hard drive as a Time Machine volume, and backed up both the iMac and the iBook to that drive. Recently, the iBook died and the iMac was really slow to work with, so my parents decided to replace the iBook with an iPad and also purchased a Mac mini. I need to help my parents get their data from their two computers (backed up by Time Machine) onto the same machine. Pretty much everything is identical between the two systems (same apps, etc.); however, they each have individual email accounts and photos that they want to retain. Is it possible to do two Time Machine restores onto one computer?

  • Linux Email Server Auto-Reply

    - by Robert Smith
    I need to set up a mail server that has the following functionality: if a user sends an email to a specific address on this server, the server must first check whether the email has a PDF attachment, do some processing on that PDF file, and then reply to the user's initial mail with the new PDF file attached. My question is how it would be possible to achieve this functionality, and what software / mail server you recommend. I'm thinking it can be solved the following way: when the server receives a new email, it executes an external Python script that checks the attachment, processes the PDF file, and then sends it back to the user's mailbox. What mail server would be able to do this, and what configuration does it need?
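    For illustration, the Python half of that design can stay quite small. Below is a minimal, untested sketch (Python 3, standard library only) of the kind of script an MTA such as Postfix or Exim could pipe incoming messages to; process_pdf() and the addresses are placeholders for the real processing step and mail setup:

        #!/usr/bin/env python3
        """Read one raw email on stdin, extract a PDF attachment,
        process it, and mail the result back to the sender."""
        import sys
        import smtplib
        from email import message_from_bytes, policy
        from email.message import EmailMessage

        def process_pdf(data: bytes) -> bytes:
            # Placeholder for the real PDF transformation.
            return data

        incoming = message_from_bytes(sys.stdin.buffer.read(), policy=policy.default)

        # Find the first PDF attachment, if any.
        pdf = next((part for part in incoming.iter_attachments()
                    if part.get_content_type() == 'application/pdf'), None)
        if pdf is None:
            sys.exit(0)  # no PDF attached; nothing to do

        reply = EmailMessage()
        reply['To'] = incoming['From']
        reply['From'] = 'pdf-service@example.com'   # hypothetical service address
        reply['Subject'] = 'Re: ' + (incoming['Subject'] or '')
        reply.set_content('Your processed PDF is attached.')
        reply.add_attachment(process_pdf(pdf.get_payload(decode=True)),
                             maintype='application', subtype='pdf',
                             filename='processed.pdf')

        with smtplib.SMTP('localhost') as smtp:   # assumes a local SMTP listener
            smtp.send_message(reply)

    With Postfix, for example, the remaining glue would be an aliases entry that pipes mail for the special address into this script; any mail server that can hand a raw message to an external command should work the same way.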

  • Windows 8 resource hacking: encrypted UIFILEs

    - by Evan Purkhiser
    In Windows 7 it was quite easy to modify resource files such as DLLs to make changes to the Windows interface. This was done by editing the UIFILEs using a program like Resource Hacker. However, in Windows 8 it seems that some of the UIFILEs are encrypted; they are no longer in plain-text form and editable in Resource Hacker like they were before. Here's an example screenshot of what they look like now: I was hoping someone might be able to provide some insight into how I can decode this back into UIFILE XML and also compile it back into the DLL.

  • Fixing my SQL Directory NTFS ACLs

    - by Shawn Cicoria
    I run my development server by booting to VHD (Windows Server 2008 R2 x64). In that instance, I also have an attached VHD (I attach it via script at boot time using Task Scheduler), and on that VHD I have my SQL instances installed. So, the other day, acting hastily, I chmod'ed my ACLs -- wow, what a day after that. In order to fix it, I created this set of BAT commands that resets them back to an operational state. It's not 100% of everything you get out of the box (I also didn't want to run a "repair"), but all is operational again.

        setlocal
        SET Inst100Path=H:\Program Files\Microsoft SQL Server\100
        REM GOTO SQLE

        SET InstanceName=MSSQLSERVER
        SET InstIdPath=H:\Program Files\Microsoft SQL Server\MSSQL10.%InstanceName%
        SET Group=SQLServerMSSQLUser$SCICORIA-HV1$%InstanceName%
        SET AgentGroup=SQLServerSQLAgentUser$SCICORIA-HV1$%InstanceName%

        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%Group%":(OI)(CI)FX
        ICACLS "%InstIdPath%\MSSQL\backup" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\data" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\FTdata" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Jobs" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%Inst100Path%\shared\Errordumps" /T /Q /grant "%Group%":(OI)(CI)RXW
        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%AgentGroup%":(OI)(CI)RX

        REM THIS IS THE SQL EXPRESS INSTANCE
        :SQLE
        SET InstanceName=SQLEXPRESS
        SET InstIdPath=H:\Program Files\Microsoft SQL Server\MSSQL10.%InstanceName%
        SET Group=SQLServerMSSQLUser$SCICORIA-HV1$%InstanceName%
        SET AgentGroup=SQLServerSQLAgentUser$SCICORIA-HV1$%InstanceName%

        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%Group%":(OI)(CI)FX
        ICACLS "%InstIdPath%\MSSQL\backup" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\data" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\FTdata" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Jobs" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%Inst100Path%\shared\Errordumps" /T /Q /grant "%Group%":(OI)(CI)RXW
        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
        endlocal

  • The last MVVM you'll ever need?

    - by Nuri Halperin
    As my MVC projects mature and grow, the need for some omnipresent, ambient model properties quickly emerges. The application no longer has only one dynamic piece of data on the page: a sidebar with a shopping cart, some news flash on the side -- pretty common stuff. The rub is that a controller is invoked in the context of a single intended request. The rest of the data, even though it could be just as dynamic, is expected to appear on its own. There are many solutions to this scenario. MVVM prescribes creating elaborate objects which expose your new data as a property on some uber-object, with more properties exposing the "side show" ambient data. The reason I don't love this approach is that it forces fairly acute awareness of the view, and soon enough you have many MVVM objects lying around, and views have to start doing null-checks to ensure you really supplied all the values before binding to them. Ick. Just as unattractive is the ViewData dictionary. It's not strongly typed, and in both this and the MVVM approach, someone has to populate these properties -- n'est-ce pas? Where does that live? With MVC2, we get the formerly-futures feature Html.RenderAction(). The feature allows you to plant a line in a view, of the format:

        <% Html.RenderAction("SessionInterest", "Session"); %>

    While this syntax looks very clean, I can't help being bothered by it. MVC was touting a very strong separation of concerns: the model taking on the role of the business logic, the controller handling routing and performing minimal view-choosing operations, and the views strictly focused on rendering out angled-bracket tags. The RenderAction() syntax has the view calling some controller and invoking it inline with its runtime rendering. This -- to my taste -- embeds too much knowledge of controllers into the view's code, which was allegedly forbidden. The one-way flow "controller receives data -> controller invokes model -> controller selects view -> controller hands data to view" now gets a "view calls controller and gets its own data", which is not so one-way anymore. Ick. I toyed with some other solutions a bit, including some base controllers, special view classes, etc. My current favorite, though, is making use of the ExpandoObject and dynamic features of C# 4.0. If you follow Phil Haack or read a bit from David Heyden, you can see the general picture emerging. The game changer is that using the new dynamic syntax, one can sprout properties on an object and make use of them in the view. Well, that beats having a bunch of uni-purpose MVVMs any day! Rather than statically exposed properties, we'll just use the capability of adding members at runtime.
    Armed with new ideas and syntax, I went to work. First, I created a factory method to enrich the focus object:

        public static class ModelExtension
        {
            public static dynamic Decorate(this Controller controller, object mainValue)
            {
                dynamic result = new ExpandoObject();
                result.Value = mainValue;
                result.SessionInterest = CodeCampBL.SessoinInterest();
                result.TagUsage = CodeCampBL.TagUsage();
                return result;
            }
        }

    This gives me a nice, fluent way to have the controller add the rest of the ambient "side show" items (SessionInterest and TagUsage in this demo) and expose them all as the Model:

        public ActionResult Index()
        {
            var data = SyndicationBL.Refresh(TWEET_SOURCE_URL);
            dynamic result = this.Decorate(data);
            return View(result);
        }

    So now what remains is that my view knows to expect a dynamic object (rather than a statically typed one) so that the ASP.NET page compiler won't barf:

        <%@ Page Language="C#" Title="Ambient Demo" MasterPageFile="~/Views/Shared/Ambient.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>

    Notice the generic ViewPage<dynamic>. It doesn't work otherwise. In the page itself, the Model.Value property contains the main data returned from the controller. The nice thing about this is that the master page (Ambient.Master) also inherits from the generic ViewMasterPage<dynamic>. So rather than the page worrying about all this ambient stuff, the sidebars and panels for ambient data all reside in a master page, and can be rendered using the RenderPartial() syntax:

        <% Html.RenderPartial("TagCloud", Model.SessionInterest as Dictionary<string, int>); %>

    Note here that a cast is necessary. This is because although dynamic is magic, it can't figure out what type this property is, and wants you to give it a type so its binder can figure out the right property to bind to at runtime. I use as; you can cast if you like. So there we go -- no violation of MVC, no explosion of MVVM models and, voila -- right? Well, I could not let this go without a tweak or two more. The first thing to improve is that some views may not need all the properties, in which case it would be a waste of resources to populate every property. The solution to this is simple: rather than exposing properties, I changed the factory method to expose lambdas -- Func<T>, really. So only if and when a view accesses a member of the dynamic object does it load the data.

        public static class ModelExtension
        {
            // take two.. lazy loading!
            public static dynamic LazyDecorate(this Controller c, object mainValue)
            {
                dynamic result = new ExpandoObject();
                result.Value = mainValue;
                result.SessionInterest = new Func<Dictionary<string, int>>(() => CodeCampBL.SessoinInterest());
                result.TagUsage = new Func<Dictionary<string, int>>(() => CodeCampBL.TagUsage());
                return result;
            }
        }

    Now that lazy loading is in place, there's really no reason not to hook up any and all possible ambient properties. Go nuts! Add them all in -- they won't get invoked unless used. This now requires changing the signature of usage on the ambient property methods -- adding some parentheses -- in the master view:

        <% Html.RenderPartial("TagCloud", Model.SessionInterest() as Dictionary<string, int>); %>

    And, of course, the controller needs to call LazyDecorate() rather than the old Decorate(). The final touch is to introduce a convenience method to my Controller class, so that the tedium of calling Decorate() everywhere goes away.
    This is done quite simply by adding a bunch of methods matching the View(object) and View(string, object) signatures of the Controller class:

        public ActionResult Index()
        {
            var data = SyndicationBL.Refresh(TWEET_SOURCE_URL);
            return AmbientView(data);
        }

        // these methods can reside in a base controller for the solution:
        public ViewResult AmbientView(dynamic data)
        {
            dynamic result = ModelExtension.LazyDecorate(this, data);
            return View(result);
        }

        public ViewResult AmbientView(string viewName, dynamic data)
        {
            dynamic result = ModelExtension.LazyDecorate(this, data);
            return View(viewName, result);
        }

    The call to AmbientView now replaces any call to View() that requires the ambient data. DRY satisfied, lazy loading in place, and no need to replace core pieces of the MVC pipeline. I call this a good MVC day. Enjoy!

  • Retro Video Game Collection

    - by Matt Christian
    Recently I've decided, in true nerd fashion, to collect either comic books or video games. Considering I'm much more versed in the technological arts and not in ACTUAL art, I thought collecting old video games would be an interesting venture. After all, I am a self-described compulsive shopper (my bank statement at the end of the month has a purchase every few days). (Don't worry, I'm not in debt and still pay my bills on time!)

    I went to a local video game store in Stevens Point called Gaming Generations, which is a neat little shop with loads of old games at great prices. For example, any NES cartridge on the shelf (not behind glass) is, at most, $4.99, with the cheaper ones around $1.99. During my first round at GG, I picked up the following:

    NES:
    - Fester's Quest
    - Adventures of Link (Zelda 2, grey cart)
    - Little Nemo
    - Total Recall
    - The Goonies 2

    PSX:
    - Galerians

    N64:
    - Mission: Impossible
    - Hybrid Heaven

    I was a little cautious: would I even like collecting old games? As soon as I popped a few of those games in, I knew right away the answer was an astounding YES! Not only is it fun to bring back memories of all these old games, but searching for them in stores is also a blast, as is saying 'I have that one, I need the second one.'

    After finding such joy in buying these games, I decided to search through 4-5 stores in Wausau for old games as well. While the prices were a bit higher and the selection smaller, the search was still fun. I found the following:

    NES:
    - Maniac Mansion
    - T&C Surf
    - Chip N Dale: Rescue Rangers
    - TMNT (the first one)
    - Mission: Impossible

    N64:
    - Turok
    - Turok 2

    Genesis:
    - Sonic the Hedgehog

    Dreamcast:
    - Shenmue

    And I found a Game Gear for $5! Now I just need to find games for it... Tonight I will go on one more small used-game expedition, once again stopping at GG and another second-hand store to see if I can find any items for my collection.

  • How to reset the IIS settings

    - by infant programmer
    I had been practicing ASP on my local machine. I just wanted to change the TCP port to 8080, and that is what I did. Since then, the URL "http://localhost/" shows PAGE CANNOT BE DISPLAYED, the technical reason given being: "You have attempted to execute a CGI, ISAPI, or other executable program from a directory that does not allow programs to be executed." I tried to change the TCP port back to 80 (which is the default), but it's not making any difference. What should I do now to make localhost work as before? When I create a virtual directory for the same path, "C:/InetPub/wwwroot", it works, but with the URL "http://localhost/virtualname/filename.asp", whereas "http://localhost/filename.asp" throws the error mentioned above. Can you please explain to me what caused this? Thank you :) Details: IIS version is 7, OS XP.

  • How does Firefox choose which plugin to use?

    - by Brian Postow
    I have written a plugin which allows the user to view multi-page TIFF images within Firefox. The problem is that QuickTime also thinks it can view TIFF images. How does Firefox choose which one gets used? Through trial and error, I've determined that:

    - Sometimes my plugin (Accel ViewTIFF) just runs, and everyone is happy.
    - Sometimes I need to disable QuickTime in the Add-ons window.
    - Sometimes I need to remove the QuickTime plugin files from the /Library/Internet Plug-ins folder (I'm on a Mac).
    - Sometimes I can put QuickTime BACK after having successfully run Accel ViewTIFF.

    I've yet to figure out when (if ever) each step is necessary. Please help me escape the cargo cult! (Or at least point me to the right place in a manual.) Thanks.
