Search Results

Search found 1760 results on 71 pages for 'guy incognito'.


  • Google Apps e-mail being rejected from some domains

    - by Paul J. Lucas
    I'm migrating e-mail for my domains to Google Apps' e-mail. Most everything seems to work except e-mail sent to any user at (at least) sonic.net is rejected with a message of the form (where any-address has been substituted for my friend's address): From: Mail Delivery Subsystem <[email protected]> Date: March 11, 2010 10:04:48 AM PST To: [email protected] Subject: Delivery Status Notification (Failure) Delivered-To: [email protected] Received: by 10.229.194.26 with SMTP id dw26cs8717qcb; Thu, 11 Mar 2010 10:04:48 -0800 (PST) Received: by 10.223.68.143 with SMTP id v15mr3841599fai.62.1268330688325; Thu, 11 Mar 2010 10:04:48 -0800 (PST) Received: by 10.223.68.143 with SMTP id v15mr5119424fai.62; Thu, 11 Mar 2010 10:04:48 -0800 (PST) Mime-Version: 1.0 Return-Path: <> X-Failed-Recipients: [email protected] Message-Id: <[email protected]> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.1.1 <[email protected]>... No such user here (state 13). And here are the headers from the message it bounces back: Received: by 10.101.90.7 with SMTP id s7mr2515885anl.176.1267979929490; Sun, 07 Mar 2010 08:38:49 -0800 (PST) Return-Path: <[email protected]> Received: from [10.0.1.203] (adsl-76-201-171-194.dsl.pltn13.sbcglobal.net [76.201.171.194]) by mx.google.com with ESMTPS id 4sm1046550yxd.70.2010.03.07.08.38.48 (version=TLSv1/SSLv3 cipher=RC4-MD5); Sun, 07 Mar 2010 08:38:49 -0800 (PST) From: "Paul J. Lucas" <[email protected]> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Subject: Some fascinating subject Date: Sun, 7 Mar 2010 08:38:46 -0800 References: <[email protected]> To: [email protected] Message-Id: <[email protected]> Mime-Version: 1.0 (Apple Message framework v1077) X-Mailer: Apple Mail (2.1077) However, I am able to send mail to a user at sonic.net using my old e-mail account. Also, my company uses Google Apps for e-mail and I can send e-mail to a user at sonic.net from my company. The differences between my personal e-mail and my company's are: My company's domain has no SPF record whereas mine does. My company's domain has an A record whereas mine does not. My SPF record initially was as prescribed by Google here. However, this guy claims Google is wrong and gives a fix. I've tried it both ways with no difference. My SPF record is currently: v=spf1 mx include:aspmx.googlemail.com include:_spf.google.com ~all As for the lack of an A record, you wouldn't think that a mail host would care about that so long as mx records are defined. However, the funny thing is that if you look at the error message, why does Google state that the recipient's domain stated that there is "No such user here" for my address? That makes no sense. Of course there is no user having my address at sonic.net. Also, I assume that I just discovered that I can't send mail to users at sonic.net by accident and that there are probably other domains I can't send e-mail to. So... anybody have any idea what's going on? And how I can get mail to users at sonic.net?
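    For anyone comparing the two setups, a quick way to spot the difference is to dump the MX, TXT (SPF) and A records of the failing personal domain and the working company domain side by side. The sketch below is only a diagnostic aid, not part of the original question: it assumes dig is installed, and the two domain names are placeholders to substitute.

        import subprocess

        def dns_records(domain):
            """Collect MX, TXT (SPF) and A records for a domain via dig +short."""
            records = {}
            for rtype in ("MX", "TXT", "A"):
                out = subprocess.run(["dig", "+short", rtype, domain],
                                     capture_output=True, text=True)
                records[rtype] = out.stdout.strip().splitlines()
            return records

        # Placeholder domains: substitute the failing personal domain and the working company domain.
        for domain in ("personal-example.com", "company-example.com"):
            print(domain, dns_records(domain))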

    Read the article

  • My Thoughts On the Xbox 180

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft's recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket. First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One. Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn't cancel my pre-order. There is still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric. I do not have any insider information that you do not possess. The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch,) and then jumped back into Ryse. That's gone, if you bought the game on disc. The new, DRM free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located. That's gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing. Well, "the people" didn't want there to be a forced, persistent connection. As such, developers can't rely on a connection and, as such, that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I wish. All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flack.) There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft. I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent for Sony's E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft's plans. It is not farfetched to believe that slide came into existence during the approximately seven hours between the two media briefings. The thing that majorly annoys me over this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money grubbing scheme? 
Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn’t last, and most publishers have stopped the practice. The ability to sell "licenses" has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it's only a matter of time before other areas follow suit. If GameStop were smart, they would have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then they could keep their business model and reduce their brick-and-mortar footprint. The digital landscape is changing. We should not block this process. As Seth MacFarlane said best, "Some issues are so important that you should drag people kicking and screaming." I believe this was said on an episode of Real Time with Bill Maher about the issue of gay marriage. Much like the original source, this is an issue that we need to drag people to the correct, progressive position. Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first- and second-party developers that can leverage this new framework to prove its validity. Over time, the third-party developers will get excited to use these tools. As an old C++ guy, I resisted C# for years. Now, I think it's one of the best languages I've ever used. I have a server room and a Co-Lo full of servers, so I originally didn't see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products. I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward-looking people who help show how technology can improve people's lives. If we can't see the value of the brief pain involved with an exciting new ecosystem, then who will?

    Read the article

  • Blank black screen with cursor after login -- RHEL5

    - by Sean O.
    I have a RHEL 5 machine here which is a Dell Precision T3500. I'm an Ubuntu guy, but I'm having a heck of a time with this machine. After processing its first security update, we cannot log in via the gdm greeter. A new kernel was installed; then I installed the nVidia drivers for our Quadro NVS 295. I know the X configuration is valid because the gdm greeter does display; however, upon login all we can get is a blank, black screen with a cursor. I thought perhaps our python installation was corrupted but a reinstall via yum has not helped. I have searched & googled extensively for a potential fix for this and can find nothing. Below are outputs from uname, a tail of an error in /var/log/messages, and the Xorg.conf. Can anyone suggest a course of action? [sean@cheetah ~]$ uname -a Linux cheetah.*.* 2.6.18-308.8.1.el5 #1 SMP Fri May 4 16:43:02 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux [sean@cheetah ~]$ sudo tail /var/log/messages Jun 5 15:03:04 cheetah gconfd (sean-4592): Resolved address "xml:readonly:/etc/gconf/gconf.xml.defaults" to a read-only configuration source at position 2 Jun 5 15:03:05 cheetah hcid[3855]: Default passkey agent (:1.8, /org/bluez/applet) registered Jun 5 15:03:05 cheetah pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found Jun 5 15:03:05 cheetah last message repeated 2 times Jun 5 15:03:06 cheetah gconfd (sean-4592): Resolved address "xml:readwrite:/home/sean/.gconf" to a writable configuration source at position 0 Jun 5 15:03:06 cheetah setroubleshoot: [program.ERROR] exception ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Traceback (most recent call last): File "/usr/bin/sealert", line 952, in ? from setroubleshoot.gui_utils import * File "/usr/lib/python2.4/site-packages/setroubleshoot/gui_utils.py", line 26, in ? import gtk File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 48, in ? from gtk import _gtk ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Jun 5 15:03:07 cheetah setroubleshoot: [program.ERROR] exception ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Traceback (most recent call last): File "/usr/bin/sealert", line 952, in ? from setroubleshoot.gui_utils import * File "/usr/lib/python2.4/site-packages/setroubleshoot/gui_utils.py", line 26, in ? import gtk File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 48, in ? 
from gtk import _gtk ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Jun 5 15:03:08 cheetah pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found Jun 5 15:07:01 cheetah ntpd[4114]: synchronized to 64.16.211.38, stratum 3 Jun 5 15:07:01 cheetah ntpd[4114]: kernel time sync enabled 0001 [sean@cheetah ~]$ cat /etc/X11/xorg.conf # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 295.53 ([email protected]) Sat May 12 00:34:20 PDT 2012 # Xorg configuration created by system-config-display Section "ServerLayout" Identifier "single head configuration" Screen 0 "Screen0" 0 0 InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "kbd" Option "XkbModel" "pc105" Option "XkbLayout" "us" EndSection Section "Monitor" ### Comment all HorizSync and VertSync values to use DDC: ### Comment all HorizSync and VertSync values to use DDC: Identifier "Monitor0" ModelName "LCD Panel 1600x1200" HorizSync 31.5 - 74.7 VertRefresh 56.0 - 65.0 Option "dpms" EndSection Section "Device" Identifier "Videocard0" Driver "nvidia" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection
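    The setroubleshoot traceback (undefined symbol: g_assertion_message_expr) points at a GTK/glib library mismatch on the machine: libatk wants a symbol that only a newer glib exports. As a quick check (a sketch of mine, not from the original post), the stdlib ctypes module can show whether the glib the dynamic linker actually picks up exports that symbol; the library lookup below is an assumption and may need an explicit path on RHEL 5.

        import ctypes
        import ctypes.util

        # Symbol the traceback complains about; it is provided by glib, not by ATK itself.
        SYMBOL = "g_assertion_message_expr"

        path = ctypes.util.find_library("glib-2.0")  # may need the full /usr/lib64/... path on RHEL 5
        if path is None:
            print("glib-2.0 not found on the linker search path")
        else:
            glib = ctypes.CDLL(path)
            try:
                getattr(glib, SYMBOL)
                print(path, "exports", SYMBOL)
            except AttributeError:
                print(path, "does not export", SYMBOL, "- libatk was built against a newer glib")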

    Read the article

  • Troubleshooting inconsistent ODBC connectivity

    - by Chris
    I'm attempting to integrate UPS WorldShip with a SQL Server 2008 R2 database but the connection is very inconsistent. UPS claims this is a DSN/Windows problem and I have not been able to convince them otherwise. The integration is quite simple: my shipping guy clicks a button which opens a form where he enters an order #. After pressing enter the shipping information will be pulled from the database for that order #. The problem is that WorldShip often times thinks the DSN does not exist. However, I am able to open WorldShip's customization tool and browse all the tables and fields in the database my DSN is connected to which means at the very least my DSN does, in fact, exist. The reason this has been so difficult to troubleshoot is because there is no consistency to the problem and I'm not able to reliably repeat any behavior. That is to say that rebooting the PC doesn't cause the connection to break and opening the integration tool and viewing the tables and fields doesn't cause the integration button to work. Is there some way for me to monitor this connection from the SQL server or get any clues as to why it fails? As requested by TallTed here is a sample of the trace file I created. After a mere 5 hours the trace file was over 130MB so there's no way I could provide it in its entirety. WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed (0) WorldShipTD d94-690 ENTER SQLAllocHandle SQLSMALLINT 3 <SQL_HANDLE_STMT> SQLHANDLE 0x0C662FC0 SQLHANDLE * 0x03EBCE38 WorldShipTD d94-690 EXIT SQLAllocHandle with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 <SQL_HANDLE_STMT> SQLHANDLE 0x0C662FC0 SQLHANDLE * 0x03EBCE38 ( 0x0C6632A0) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 0 <SQL_ATTR_QUERY_TIMEOUT> SQLPOINTER 30 SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 0 <SQL_ATTR_QUERY_TIMEOUT> SQLPOINTER 30 SQLINTEGER -5 DIAG [HYC00] [Microsoft][ODBC Microsoft Access Driver]Optional feature not implemented (106) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10) WorldShipTD d94-690 ENTER SQLGetInfoW HDBC 0x0C662FC0 UWORD 77 <SQL_DRIVER_ODBC_VER> PTR 0x03EBCEDC SWORD 100 SWORD * 0x0028E290 WorldShipTD d94-690 EXIT SQLGetInfoW with return code 0 (SQL_SUCCESS) HDBC 0x0C662FC0 UWORD 77 <SQL_DRIVER_ODBC_VER> PTR 0x03EBCEDC [ 10] "03.51" SWORD 100 SWORD * 0x0028E290 (10) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 1228 <unknown> SQLPOINTER [Unknown attribute 1228] SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1228 <unknown> SQLPOINTER [Unknown attribute 1228] SQLINTEGER -5 DIAG [HY092] [Microsoft][ODBC Microsoft Access Driver]Invalid attribute/option identifier (86) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 
(SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 DIAG [HY092] [Microsoft][ODBC Microsoft Access Driver]Invalid attribute/option identifier (86) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10)
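    To get the monitoring asked for above without waiting for WorldShip to fail, one option is a small polling script that opens the DSN on a schedule and logs each attempt. This is only a sketch (it assumes the third-party pyodbc package and a placeholder DSN name, neither of which comes from UPS or the original post); a run of failures in its output would at least timestamp when the DSN stops responding.

        import datetime
        import time

        import pyodbc  # third-party: pip install pyodbc

        DSN = "WorldShip"        # placeholder: use the DSN name WorldShip is configured with
        INTERVAL_SECONDS = 60

        while True:
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            try:
                conn = pyodbc.connect("DSN=" + DSN, timeout=10)
                conn.execute("SELECT 1").fetchone()
                conn.close()
                print(stamp, "OK")
            except pyodbc.Error as exc:
                print(stamp, "FAILED:", exc)
            time.sleep(INTERVAL_SECONDS)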

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later). This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet).  CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier.  It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it.  In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session. While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions.  If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be.  CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back":  HA-HA, I GOT YOU!  Since GO starts a new batch all variable declarations are lost. Here's a simple example I recently used at work.  I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput.  I also needed to log how long it took for the script to run and include the mirror settings for the database in question.  I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable.  My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used.  As it turns out, I only needed to grab the start time at the beginning, I could get the rest of the data at the end of the process.   The code is pretty small: declare @time binary(128)=cast(getdate() as binary(8)) set context_info @time   ... rest of Big Adventure code ...   go use master; insert mirror_test(server,role,partner,db,state,safety,start,duration) select @@servername, mirroring_role_desc, mirroring_partner_instance, db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc, cast(cast(context_info() as binary(8)) as datetime), datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate()) from sys.database_mirroring where db_name(database_id) like 'Adv%';   I declared @time as a binary(128) since CONTEXT_INFO is defined that way.  I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00.  To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion.  It's not the safest way perhaps, but it works on my machine. 
:) As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO.  With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious  how long certain parts take.  In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor.  (His code is in UPPERCASE as it was in the original, mine is all lowercase): declare @msg binary(128) set @msg=cast('Altering bigProduct.ProductID' as binary(128)) set context_info @msg go ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL GO set context_info 0x0 go declare @msg1 binary(128) set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128)) set context_info @msg1 go ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID) GO set context_info 0x0 go declare @msg2 binary(128) set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128)) set context_info @msg2 go ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL GO set context_info 0x0 go declare @msg3 binary(128) set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128)) set context_info @msg3 go ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID) GO set context_info 0x0 go declare @msg4 binary(128) set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128)) set context_info @msg4 go CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost) GO set context_info 0x0   This doesn't include the entire script, only those portions that altered a table or created an index.  One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression.  And since GO starts a new batch I need to declare a variable in each one.  And of course I have to use CAST because it won't implicitly convert varchar to binary.  And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes.  And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work. So what does all this aggravation get you?  As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want): select CAST(context_info as varchar(128)) from sys.dm_exec_sessions where session_id=51   Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages.  You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it).  Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction.  All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation.  If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables.  That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
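    For the closing exercise, here is a hedged sketch of the "automated job" half (mine, not Rob's): a small script that polls sys.dm_exec_sessions and prints each tagged session's CONTEXT_INFO with a timestamp, so you can see how long each labelled statement ran without reaching for Profiler. It assumes the third-party pyodbc package and a placeholder connection string.

        import datetime
        import time

        import pyodbc  # third-party; the connection string below is a placeholder

        CONN_STR = "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes"
        QUERY = ("select session_id, cast(context_info as varchar(128)) "
                 "from sys.dm_exec_sessions where context_info is not null")

        conn = pyodbc.connect(CONN_STR, autocommit=True)
        while True:
            stamp = datetime.datetime.now().strftime("%H:%M:%S")
            for session_id, msg in conn.execute(QUERY).fetchall():
                # msg is the tag set via SET CONTEXT_INFO; strip the 0x00 padding
                print(stamp, session_id, (msg or "").rstrip("\x00 "))
            time.sleep(5)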

    Read the article

  • Microsoft Business Intelligence Seminar 2011

    - by DavidWimbush
    I was lucky enough to attend the maiden presentation of this at Microsoft Reading yesterday. It was pretty gripping stuff, not only because of what was said but also because of what could only be hinted at. Here's what I took away from the day. (Disclaimer: I'm not a BI guru, just a reasonably experienced BI developer, so I may have misunderstood or misinterpreted a few things, particularly when so much of the talk was about the vision and subtle hints of what is coming. Please comment if you think I've got anything wrong. I'm also not going to even try to cover Master Data Services, as I struggled to imagine how you would actually use it.) I was a bit worried when I learned that the whole day was going to be presented by one guy, but Rafal Lukawiecki is a very engaging speaker. He's going to be presenting this about 20 times around the world over the coming months. If you get a chance to hear him speak, I say go for it. No doubt some of the hints will become clearer as Denali gets closer to RTM. Firstly, things are definitely happening in the SQL Server Reporting and BI world. Traditionally IT would build a data warehouse, then cubes on top of that, and then publish them in a structured and controlled way. But, just as with many IT projects in general, by the time it's finished the business has moved on and the system no longer meets their requirements. This is not sustainable, and something more agile is needed, but there has to be some control. Apparently we're going to be hearing the catchphrase 'Balancing agility with control' a lot. More users want more access to more data. Can they define what they want? Of course not, but they'll recognise it when they see it. It's estimated that only 28% of potential BI users have meaningful access to the data they need, so there is a real pent-up demand. The answer looks like: give them some self-service tools so they can experiment and see what works, and then IT can help to support the results. It's estimated that 32% of Excel users are comfortable with its analysis tools such as pivot tables. It's the power user's preferred tool. Why fight it? That's why PowerPivot is an Excel add-in and that's why they released a Data Mining add-in for it as well. It does appear that the strategy is going to be to use Reporting Services (in SharePoint mode), PowerPivot, and possibly something new (smiles and hints but no details) to create reports and explore data. Everything will be published and managed in SharePoint, which gives users the ability to mash-up, share and socialise what they've found out. SharePoint also gives IT tools to understand what people are looking at and where to concentrate effort. If PowerPivot report X becomes widely used, it's time to check that it shows what they think it does and perhaps get it a bit more under central control. There was more SharePoint detail that went slightly over my head regarding where Excel Services and Excel Web Application fit in, the differences between them, and the suggestion that it is likely they will one day become one (but not in the immediate future). That basic pattern is set to be expanded upon by further exploiting Vertipaq (the columnar indexing engine that enables PowerPivot to store and process a lot of data fast and in a small memory footprint) to provide scalability 'from the desktop to the data centre', and some yet-to-be-detailed advances in 'frictionless deployment' (part of which is about making the difference between local and the cloud pretty much irrelevant).
Excel looks like becoming Microsoft's primary BI client. It already has the ability to consume cubes, strong visualisation tools, slicers (which are part of Excel, not PowerPivot), a data mining add-in, and PowerPivot. A major hurdle for self-service BI is presenting the data in a consumable format. You can't just give users PowerPivot and a server with a copy of the OLTP database(s). Building cubes is labour intensive and doesn't always give the user what they need. This is where the BI Semantic Model (BISM) comes in. I gather it's a layer of metadata you define that can combine multiple data sources (and types of data source) into a clear 'interface' that users can work with. It comes with a new query language called DAX. SSAS cubes are unlikely to go away overnight because, with their pre-calculated results, they are still the most efficient way to work with really big data sets. A few other random titbits that came up: Reporting Services is going to get some good new stuff in Denali. Keep an eye on www.projectbotticelli.com for the slides. You can also view last year's seminar sessions, which covered a lot of the same ground as far as the overall strategy is concerned. They plan to add more material as Denali's features are publicly exposed. Check out the PASS keynote address for a showing of Yahoo's SQL BI servers. Apparently they wheeled the rack out on stage still plugged in and running! Check out the Excel 2010 Data Mining Add-Ins. 32-bit only at present, but 64-bit is on the way. There are lots of data sets, many of them free, at the Windows Azure Marketplace Data Market (where you can also get ESRI shape files). If you haven't already seen it, have a look at the Silverlight Pivot Viewer (http://weblogs.asp.net/scottgu/archive/2010/06/29/silverlight-pivotviewer-now-available.aspx). The Bing Maps Data Connector is worth a look if you're into spatial stuff (http://www.bing.com/community/site_blogs/b/maps/archive/2010/07/13/data-connector-sql-server-2008-spatial-amp-bing-maps.aspx).

    Read the article

  • Single-Signon options for Exchange 2010

    - by freiheit
    We're working on a project to migrate employee email from Unix/open-source (courier IMAP, exim, squirrelmail, etc) to Exchange 2010, and trying to figure out options for single-signon for Outlook Web Access. So far all the options I've found are very ugly and "unsupportable", and may simply not work with Forefront. We already have JA-SIG CAS for token-based single-signon and Shibboleth for SAML. Users are directed to a simple in-house portal (a Perl CGI, really) that they use to sign in to most stuff. We have an HA OpenLDAP cluster that's already synchronized against another AD domain and will be synchronized with the AD domain Exchange will be using. CAS authenticates against LDAP. The portal authenticates against CAS. Shibboleth authenticates with CAS but pulls additional data from LDAP. We're moving in the direction of having web services authenticate against CAS or Shibboleth. (Students are already on SAML/Shibboleth authenticated Google Apps for Education) With Squirrelmail we have a horrible hack linked to from that portal page that authenticates against CAS, gets your original plaintext password (yes, I know, evil), and gives you an HTTP form pre-filled with all the necessary squirrelmail login details with javaScript onLoad stuff to immediately submit the form. Trying to find out exactly what is possible with Exchange/OWA seems to be difficult. "CAS" is both the acronym for our single-signon server and an Exchange component. From what I've been able to tell there's an addon for Exchange that does SAML, but only for federating things like free/busy calendar info, not authenticating users. Plus it costs additional money so there's no way to experiment with it to see if it can be coaxed into doing what we want. Our plans for the Exchange cluster involve Forefront Threat Management Gateway (the new ISA) in the DMZ front-ending the CAS servers. So, the real question: Has anybody managed to make Exchange authenticate with CAS (token-based single-signon) or SAML, or with something I can reasonably likely make authenticate with one of those (such as anything that will accept apache's authentication)? With Forefront? Failing that, anybody have some tips on convincing OWA Forms Based Authentication (FBA) into letting us somehow "pre-login" the user? (log in as them and pass back cookies to the user, or giving the user a pre-filled form that autosubmits like we do with squirrelmail). This is the least-favorite option for a number of reasons, but it would (just barely) satisfy our requirements. From what I hear from the guy implementing Forefront, we may have to set OWA to basic authentication and do forms in Forefront for authentication, so it's possible this isn't even possible. I did find CasOwa, but it only mentions Exchange 2007, looks kinda scary, and as near as I can tell is mostly the same OWA FBA hack I was considering slightly more integrated with the CAS server. It also didn't look like many people had had much success with it. And it may not work with Forefront. There's also "CASifying Outlook Web Access 2", but that one scares me, too, and involves setting up a complex proxy config, which seems more likely to break. And, again, doesn't look like it would work with Forefront. Am I missing something with Exchange SAML (OWA Federated whatchamacallit) where it is possible to configure to do user authentication and not just free/busy access authorization?

    Read the article

  • Issues with Verizon's "Network Extender" device talking on my home network.

    - by Logan
    I recently switched my phone service to Verizon from AT&T, and I get somewhat spotty service in my house. I called them and they sent me a "network extender" device for free. It's a femtocell that connects to my home network. The directions that come with it are very dumbed down, and basically just say to connect it to your router and put it near a window (so it can get a GPS signal; it has to make sure it's within the correct area before operating). The problem I'm having is the network light on it stays red. The troubleshooting information that came with it tells me this means there is a bad network connection. It's connected through an ASUS router running DD-WRT. No other devices on my network have a problem with it, including a Western Digital WDLIVE device, mine and my wife's cell phones (via wifi), a Wii, and an Xbox. If I connect the device directly to my cable modem, the light goes blue (which means good) and it starts working. So this tells me that it's definitely a configuration issue with my router. Verizon basically washed their hands of me when I connected it to my cable modem, and told me that it's a router issue and to try a different router. Because normal people just have extra routers lying around their houses... When I connect it to the router, I can watch the DHCP Clients list on the status page, and the MAC of the network extender quickly fills up the clients list, grabbing every available DHCP address. It's like it grabs an address, can't connect to the internet, releases it, grabs another, then another, then another. So in the DHCP server settings I assigned a static IP to its MAC. This made it quit doing what it was doing before, but it's still not working. I found the ports I needed to open on Verizon's website, and opened them in the port forwarding config on my router. This still didn't help. So, I tried setting the network extender device's IP as the DMZ IP on the router. This still did no good. I called Verizon back and got the tech to write up a report, which he passed on to a "senior network tech" who called me back a few hours ago. This guy told me that while an ASUS router isn't listed as a supported device, he's not really sure why it's not working. He suggested restoring the firmware to stock ASUS firmware and trying again. I have a very hard time believing it's DD-WRT doing this, since every other device is working just fine with it. But it's also not the Network Extender, since it works just fine when connected directly to the modem. At this point I'm out of ideas, and the next step is to restore the stock firmware on my router and then go to Walmart and get a Linksys WRT-54G to try. Is there anything else I could try before going that drastic? Cliffs: -Network extender won't work behind the router, works when connected directly to the cable modem. -Extender goes nuts when allowed to pick its own DHCP address; I had to assign it a static IP. -Won't work when the correct ports are forwarded to it. -Won't work with a DMZ address.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details; I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn't quite so elegant, but it didn’t really need to be as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences with complete re-writes. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C; Borland made the mistake with Arago and Quattro Pro; and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if they charge the machine-guns determinedly enough, all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.

    Read the article

  • Agile Like Jazz

    - by Jeff Certain
    (I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!) I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition. One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, to strut their stuff, to soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good. Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were. All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility. Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters. Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for the vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily.
These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will. The other issue that arises is due to the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for maintaining communication. While this is a rule of thumb, it seems to imply that any team over about 6 people is going to become less agile simply because of the communications burden. Teams of ten or twelve seem like they fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And, there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session. But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project. I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.
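    Purely as a back-of-the-envelope illustration (my own reading of that rule of thumb, not Jeff's math): if each additional member costs roughly 20% of a person's time in communication, and pairwise channels grow as n(n-1)/2, the numbers turn ugly right around six people.

        # Hedged illustration only: one possible reading of the "20% per team member" rule of thumb.
        for n in range(2, 13):
            channels = n * (n - 1) // 2       # pairwise communication paths in a team of n
            overhead = 0.20 * (n - 1)         # share of one member's time spent communicating
            print(f"{n:2d} people: {channels:3d} channels, ~{overhead:.0%} of each member's time")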

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to the Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddie’s belief that if they charge the machine-guns determinedly enough, all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipe dream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on running all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.
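    As a tongue-in-cheek sketch of that last idea (nothing the author actually built), a server-side git pre-receive hook could refuse any push whose diff fails a quality gate; the lint-score command below is a hypothetical scorer standing in for whatever metric tool you trust:

        #!/bin/bash
        # Hypothetical quality gate: reject pushes that score below 80.
        # pre-receive hooks read "<old> <new> <ref>" lines on stdin; a non-zero
        # exit refuses the whole push.
        while read oldrev newrev refname; do
            score=$(git diff "$oldrev" "$newrev" | lint-score)   # lint-score is made up
            if [ "${score:-0}" -lt 80 ]; then
                echo "Rejected: quality score ${score:-0} is below 80. Start again from scratch." >&2
                exit 1
            fi
        done
        exit 0

    Destroying the offending history outright, as the post muses, is left as an exercise for the sufficiently ruthless.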

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM messaging to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind.) Most meetings are pointless, and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to think about alternate forms of information passing. Here's the criterion for a good meeting - whether in-person or over the web: 100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time. It doesn't get any simpler than that. If it doesn't meet that criterion, then don't invite that person to that meeting. If you're just conveying information and no one has the need for immediate interaction with that information (like telling you something that modifies the message), then send an e-mail. If you're a manager, and you need to get status from lots of people, pick up the phone. If you need a quick answer, use IM. I once had a high-level manager who called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross-pollinate ideas". In fact, it was a complete waste of time for most everyone, except in the one or two moments that they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of SmartPhone but with no phone and no real graphics, but this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after that. (A rough sketch of that kind of cost ticker appears at the end of this post.) Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them. Be Clear About the Goal This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people who invite me to a meeting that I will be as detailed as I can - but the more detail they can tell me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here - and it's that you should have a clearly defined "win" for the meeting. When the meeting is over, and everyone goes back to work, what were you expecting them to do with the information? Have that clearly defined in your head, and in the meeting invite. Understand the Technology There are several web-meeting clients out there. I use them all, since I meet with clients all over the world.
They all work differently - so I take a few moments and read up on the different clients and find out how I can use the tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not, learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use) ZoomIt (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially its whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting. Use a WebCam I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on. Set Up Early Whether you're attending or leading the meeting, don't wait to sign on to the meeting at the time when it starts. I can almost guarantee that a 10:00 meeting will actually start at 10:10, because the participants or the leader are just now installing the web client for the meeting at 10:00. Sign on early, go on mute, and then wait for everyone to arrive. Mute When Not Talking No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings, and I mute everyone by default, and then tell them to un-mute to talk to the group. Share Collateral If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But", you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day. Set Actions At the Meeting A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow-up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary. So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil, and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.
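    Here is the promised sketch of the meeting-cost ticker from the Palm Pilot story - a guess-the-numbers toy rather than anything the author shipped; the headcount and salary figures are placeholders you would estimate yourself, just as in the anecdote:

        using System;
        using System.Threading;

        class MeetingCostTicker
        {
            static void Main()
            {
                int attendees = 50;               // headcount in the room (a guess, as in the story)
                decimal avgAnnualSalary = 80000m; // placeholder average salary
                // roughly 2080 working hours per year, 3600 seconds per hour
                decimal costPerSecond = attendees * avgAnnualSalary / (2080m * 3600m);

                decimal total = 0m;
                while (true)
                {
                    Thread.Sleep(1000);           // tick once per second
                    total += costPerSecond;
                    Console.Write("\rThis meeting has cost roughly {0:C} so far", total);
                }
            }
        }

    Left running on a screen everyone can see, it makes the same point the Palm Pilot did.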

    Read the article

  • How I Work: Staying Productive Whilst Traveling

    - by BuckWoody
    I travel a lot. Not like some folks that are gone every week, mind you, although in the last month I've been to: Cambridge, UK; Anchorage, AK; San Jose, CA; Copenhagen, DK; Boston, MA; and I'm currently en route to Anaheim, CA. While this many places in a month is a bit unusual for me, I would say I travel frequently. I've travelled most of my 28+ years in IT, and at one time was a consultant traveling weekly. With that much time away from my primary work location, I have to find ways to stay productive. Some might say "just rest – take a nap!" – but I'm not able to do that. For one thing, I'm a very light sleeper and I've never slept on a plane - even on a 30+ hour trip to New Zealand in Business Class - so that just isn't an option. I'm also not always in the plane, of course. There's the hotel, the taxi/bus/train, the airport, and then all that over again when I arrive. Since my regular jobs have many demands, I have to get work done. Note: No, I'm not always focused on work. I need downtime just like everyone else. Sometimes I just think, watch a movie or listen to tunes – and I give myself permission to do that anytime – sometimes the whole trip. I have too few heartbeats left in life to only focus on work – it's just not that important, and neither am I. Some of these tasks are letters to friends and family, or other personal things. What I'm talking about here is a plan, not some task list I have to follow. When I get to the location I'm traveling to, I always build in as much time as I can to ensure I enjoy those sights and the people I'm with. I would find traveling to be a waste if not for that. The Unrealistic Expectation As I would evaluate the trip I was taking – say a 6-8 hour flight – I would expect to get 10-12 hours of work done. After all, there's the time at the airport, the taxi and so on, and then of course the time in the air with all of the room, power, internet and everything else I needed to get my work done. I would pile up tasks at home, pack my bags, and head happily to the magical land of the TSA. Right. On return from the trip, I had accomplished little, had more e-mails and other work that had piled up, and I was tired, hungry, and disorganized. This had to change. So, I decided to do three things: segment my work, set realistic expectations, and plan accordingly. Segmenting By Available Resources The first task was to decide what kind of work I could do in each location – if any. I found that I was dependent on a few things to get work done, such as power, the Internet, and a place to sit down. Before I fly, I take some time at home to get all of the work I'd like to accomplish while away segmented into these areas, and print that out on paper, which goes in my suit-coat pocket along with a mechanical pencil. I print my tickets, and I'm all set for the adventure ahead. Then I simply do each kind of work whenever I'm in that situation. No Power There are certain times when I don't have power available. But not only that, I might not even be able to use most of my electronics. So I now schedule as many phone calls as I can for the taxi/bus/train rides and the airports. I have a paper notebook (Moleskine, of course) and a pencil, and I print out any notes or numbers I need prior to the trip. Once I'm airborne or at the airport, I work on my laptop. I check and respond to e-mails, create slides, write code, do architecture, whatever I can. If I can't use any electronics, or once the power runs out, I schedule time for reading.
I can read at the airport or anywhere, actually, even in-flight or on any other transport. I "read with a pencil", meaning I take a lot of notes, which I like to put in OneNote, but since in most cases I don't have power, I use the Moleskine to do that. Speaking of which, sometimes as I'm thinking I come up with new topics, ideas, blog posts, or things to teach in my classes. Once again I take out the notebook and write it down. All of these notes get a check-mark when I get back to the office and transfer the writing to OneNote. I've tried those "smart pens" and so on to automate this, but it just never works out. Pencil and paper are just fine. As I mentioned, sometimes I just need to think. I'll do nothing, and let my mind wander, thinking of nothing in particular, or about some math problem or science question I'm interested in. My only issue with this is that I communicate to think, and I don't want to drive people crazy by being that guy who won't shut up, so I think in a different way. Power, but no Internet or Phone If I have power but no Internet or phone, I focus on the laptop and the tablet as before, and I also recharge my other gadgets. Power, Internet, Phone and a Place to Work At first I thought that when I arrived at the hotel or event I could get the same amount of work done that I do at the office. Not so. There are simply too many distractions, things you need that aren't to hand, and other issues that prevent it. Of course, I can work on any device, read, think, write or whatever, but I am simply not as productive as I am in my home office. So I plan for about 25-50% as much work getting done in this environment as I think I could really do. I've done some measurements, and this holds out to be true almost every time. The key is that I re-set my expectations (and my co-workers' expectations as well) that this is the case. I use the Out-Of-Office notices to let people know that I'm just not going to be 100% at this time – it's hard for everyone, but it's more honest and realistic, and I'd rather they know that – and that I realize that – than to let them think I'm totally available. Because I'm not – I'm traveling. I don't tend to put in too much detail, because after all I don't necessarily want to let people know when I'm not home :) but I do think it's important to let the people that depend on me know that I'll get back to them later. I hope this helps you think through your own methodology of staying productive when you travel. Or perhaps you just go offline, and don't worry about any of this – good for you! That's completely valid as well. (Oh, and yes, I wrote this at 35K feet, on Alaska Airlines on a trip. :) Practice what you preach, Buck.)

    Read the article

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
    We gave access to a contractor to install a firewall and somehow while he was doing it he fracked something up. Everything went off-line about 24 hours ago and we are effectively out of business until I solve this and the person who messed up the thing is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files and normally everything runs fine. All 'services' are running according to 1and1 server monitoring and mail is being delivered just fine. The whole thing was off-line until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning and I got everything back except the http access. All the domains (~200 of them) are returning a 403 error and nothing is recorded in the access log. On every restart I see this error in the messages log file: init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory and a little later these: kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted) kernel: Hardware name: X9SCL/X9SCM kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1 kernel: Call Trace: kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0 kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20 kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50 kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0 kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0 kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1] kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0 kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20 kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150 kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0 kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20 kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0 kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20 And something is wrong with the Named/BIND resulting in the same error for all domains: zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found zone DOMAINEXAMPLE.com/IN: not loaded due to errors. _default/DOMAINEXAMPLE.com/IN: file not found I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
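    A few quick checks that often explain a blanket 403 alongside zone-file errors after this kind of change (a sketch only - the paths assume a stock CentOS 6 / Apache / BIND layout, which a 1and1 image may not follow exactly):

        # Is SELinux suddenly enforcing after the firewall/kernel work?
        getenforce

        # Do the vhost document roots still exist with sane ownership/permissions?
        ls -ld /var/www/vhosts /var/www/vhosts/*

        # Restore default SELinux contexts on the web tree if they were lost
        restorecon -Rv /var/www

        # The reason for a 403 is normally spelled out in Apache's error log
        tail -n 50 /var/log/httpd/error_log

        # Ask BIND which zone files it cannot find (chrooted installs keep them
        # under /var/named/chroot/...)
        named-checkconf -z /etc/named.conf

    None of this is a definitive diagnosis, but it usually narrows down whether the web tree, its permissions, or the name server configuration moved out from under the services.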

    Read the article

  • How to implement smooth flocking

    - by Craig
    I'm working on a simple survival game, avoid the big guy and chase the the small guys to stay alive for as long as possible. I have taken the chase and evade example from MSDN create and drawn 20 mice on the screen. I want the small guys to flock when they arent evading. They are doing this, but it isnt as smooth as I would like it to be. How do i make the movement smoother? Its very jittery.# Below is what I have going at the moment, flocking code is within the IF statement, when it isnt set to evading. Any help would be greatly appreciated! :) namespace ChaseAndEvade { class MouseSprite { public enum MouseAiState { // evading the cat Evading, // the mouse can't see the "cat", and it's wandering around. Wander } // how fast can the mouse move? public float MaxMouseSpeed = 4.5f; // and how fast can it turn? public float MouseTurnSpeed = 0.20f; // MouseEvadeDistance controls the distance at which the mouse will flee from // cat. If the mouse is further than "MouseEvadeDistance" pixels away, he will // consider himself safe. public float MouseEvadeDistance = 100.0f; // this constant is similar to TankHysteresis. The value is larger than the // tank's hysteresis value because the mouse is faster than the tank: with a // higher velocity, small fluctuations are much more visible. public float MouseHysteresis = 60.0f; public Texture2D mouseTexture; public Vector2 mouseTextureCenter; public Vector2 mousePosition; public MouseAiState mouseState = MouseAiState.Wander; public float mouseOrientation; public Vector2 mouseWanderDirection; int separationImpact = 4; int cohesionImpact = 6; int alignmentImpact = 2; int sensorDistance = 50; public void UpdateMouse(Vector2 position, MouseSprite [] mice, int numberMice, int index) { Vector2 catPosition = position; int enemies = numberMice; // first, calculate how far away the mouse is from the cat, and use that // information to decide how to behave. If they are too close, the mouse // will switch to "active" mode - fleeing. if they are far apart, the mouse // will switch to "idle" mode, where it roams around the screen. // we use a hysteresis constant in the decision making process, as described // in the accompanying doc file. float distanceFromCat = Vector2.Distance(mousePosition, catPosition); // the cat is a safe distance away, so the mouse should idle: if (distanceFromCat > MouseEvadeDistance + MouseHysteresis) { mouseState = MouseAiState.Wander; } // the cat is too close; the mouse should run: else if (distanceFromCat < MouseEvadeDistance - MouseHysteresis) { mouseState = MouseAiState.Evading; } // if neither of those if blocks hit, we are in the "hysteresis" range, // and the mouse will continue doing whatever it is doing now. // the mouse will move at a different speed depending on what state it // is in. when idle it won't move at full speed, but when actively evading // it will move as fast as it can. this variable is used to track which // speed the mouse should be moving. float currentMouseSpeed; // the second step of the Update is to change the mouse's orientation based // on its current state. if (mouseState == MouseAiState.Evading) { // If the mouse is "active," it is trying to evade the cat. The evasion // behavior is accomplished by using the TurnToFace function to turn // towards a point on a straight line facing away from the cat. In other // words, if the cat is point A, and the mouse is point B, the "seek // point" is C. 
// C // B // A Vector2 seekPosition = 2 * mousePosition - catPosition; // Use the TurnToFace function, which we introduced in the AI Series 1: // Aiming sample, to turn the mouse towards the seekPosition. Now when // the mouse moves forward, it'll be trying to move in a straight line // away from the cat. mouseOrientation = ChaseAndEvadeGame.TurnToFace(mousePosition, seekPosition, mouseOrientation, MouseTurnSpeed); // set currentMouseSpeed to MaxMouseSpeed - the mouse should run as fast // as it can. currentMouseSpeed = MaxMouseSpeed; } else { // if the mouse isn't trying to evade the cat, it should just meander // around the screen. we'll use the Wander function, which the mouse and // tank share, to accomplish this. mouseWanderDirection and // mouseOrientation are passed by ref so that the wander function can // modify them. for more information on ref parameters, see // http://msdn2.microsoft.com/en-us/library/14akc2c7(VS.80).aspx ChaseAndEvadeGame.Wander(mousePosition, ref mouseWanderDirection, ref mouseOrientation, MouseTurnSpeed); // if the mouse is wandering, it should only move at 25% of its maximum // speed. currentMouseSpeed = .25f * MaxMouseSpeed; Vector2 separate = Vector2.Zero; Vector2 moveCloser = Vector2.Zero; Vector2 moveAligned = Vector2.Zero; // What the AI does when it sees other AIs for (int j = 0; j < enemies; j++) { if (index != j) { // Calculate a vector towards another AI Vector2 separation = mice[index].mousePosition - mice[j].mousePosition; // Only react if other AI is within a certain distance if ((separation.Length() < this.sensorDistance) & (separation.Length()> 0) ) { moveAligned += mice[j].mouseWanderDirection; float distance = Math.Abs(separation.Length()); if (distance == 0) distance = 1; moveCloser += mice[j].mousePosition; separation.Normalize(); separate += separation / distance; } } } if (moveAligned.LengthSquared() != 0) { moveAligned.Normalize(); } if (moveCloser.LengthSquared() != 0) { moveCloser.Normalize(); } moveCloser /= enemies; mice[index].mousePosition += (separate * separationImpact) + (moveCloser * cohesionImpact) + (moveAligned * alignmentImpact); } // The final step is to move the mouse forward based on its current // orientation. First, we construct a "heading" vector from the orientation // angle. To do this, we'll use Cosine and Sine to tell us the x and y // components of the heading vector. See the accompanying doc for more // information. Vector2 heading = new Vector2( (float)Math.Cos(mouseOrientation), (float)Math.Sin(mouseOrientation)); // by multiplying the heading and speed, we can get a velocity vector. the // velocity vector is then added to the mouse's current position, moving him // forward. mousePosition += heading * currentMouseSpeed; } } }
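    The jitter comes mostly from the last line of the wander branch, which adds the weighted flocking vectors straight onto mousePosition, so the mice teleport a little every frame. A smoother approach (only a sketch, reusing the sample's own TurnToFace helper; the 10f look-ahead distance is an arbitrary choice) is to treat the flocking sum as a steering direction, turn towards it gradually, and let the existing heading/speed code do the actual movement:

        // Inside the wander branch, instead of adding the flocking vectors to the position:
        Vector2 flocking = (separate * separationImpact)
                         + (moveCloser * cohesionImpact)
                         + (moveAligned * alignmentImpact);

        if (flocking.LengthSquared() > 0)
        {
            // Turn gradually towards a point a little way along the flocking direction;
            // MouseTurnSpeed caps how sharply the mouse can turn each frame, which is
            // what removes the jitter.
            Vector2 flockTarget = mousePosition + Vector2.Normalize(flocking) * 10f;
            mouseOrientation = ChaseAndEvadeGame.TurnToFace(
                mousePosition, flockTarget, mouseOrientation, MouseTurnSpeed);
        }
        // ...then fall through to the existing heading * currentMouseSpeed movement.

    Because the position only ever changes through heading * currentMouseSpeed, the mice drift toward the flock instead of snapping to it.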

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience - and how to publish material written once to both of these blogs. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic. My two blogs I have two working blogs: this one here, and a technology and programming blog for the local market. My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career and they have very good web statistics too. My local blog My local blog is about programming, the web and technology. It has a far wider target audience than this blog here. For example, in my local blog I also write about local events, cool new concept phones, different websites providing interesting services, and so on. But local readers can also find postings there about how to solve one programming problem or another, and postings about the Microsoft technologies I am playing with. So far my local blog has a lot of readers for a country as small as Estonia. This blog has made me a lot of cool contacts and I have had a lot of interesting discussions there about different technical topics. Why did I start this blog? Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to cover more than one technical topic to find enough readers. At the same time you are still interested in your main topics and you want to reach more people who share those interests with you. Practically speaking, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with? Every second of time I have put into this blog has been worth it. Thanks to this blog I have found new good friends, and without them I think it would be more boring to work on different problems and solutions. Defining target audiences One thing you should always do when you have more than one blog is define your target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it the simple way and just as effectively. Here is how I defined the target audiences for my blogs: local blog – the reader of my local blog is an IT professional, software developer, technology innovator or just someone who is interested in technology; this blog – the reader of this blog is an experienced professional software developer who works with Microsoft technologies, or a software developer who is open-minded and open to new technologies and interesting solutions to development problems. You can see how the local blog – due to a small market with fewer people – has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and especially at software development. On the practical side I think these decisions have worked out well, because it is very hard to build up a popular general IT blog. On a global level it is better to target a specific niche and find readers who are professionals in your favorite topics.
Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them. Publishing content to different blogs My local blog and this blog have some overlapping topics, like .NET, databases and SEO. Because of this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish the same thing on my local blog as well? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences, then you may need to modify your posting before publishing it. The questions you have to answer are: is the target audience interested in this topic? is the target audience expecting a more specific and deeper treatment of this topic, or a more general one? is the problem you are discussing relevant for the target audience or not? You have to answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them. Cross-posting and referencing It is tempting to save the time that preparing a blog post takes, and when you are done posting in one blog it may seem like a good idea to make a short posting in the other blog and add a reference to the first one, where the topic is discussed at greater length. Well, don't do it – all your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific treatment of the topic. In this case feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. Estonian, for example, is a complex language, and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits both blogs well, I will write it for one blog and then answer the previous three questions before posting the same thing to the other blog. Conclusion Growing out of your local market is nothing mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high-quality content. This is something you learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time, you will get over all the obstacles pretty soon. Just don't forget one thing – content is king, and your readers expect high quality from you.

    Read the article

  • Installing XP through USB-flash disc

    - by Crazy Buddy
    I don't know whether this can be asked here... so pardon me if not. Probably, this is specific to my laptop, and possibly a contradiction to this question already asked here... I tried to format my "government-provided" laptop (no CD drive). I thought those IT guys were proving that they're too smart..! I have the Windows XP CD right now. I didn't want to stick with some home-made OS from our government. So, I used another laptop to format the govt. thing and tried to install XP (as I didn't have enough bills to invest in Windows 7 or 8). Case 1: First, I allowed WinSetupFromUSB 1.0 beta 8 to deal with the flash disk. To my surprise, the XP text-mode screen appeared. Using the first part, I formatted my laptop. It started to copy files, entered the next part, and completed the installation. I started my PC for the first time. The XP splash screen appeared. Suddenly, a blue screen flashed and disappeared (I can't even read what it says). It rebooted and arrived at the screen "Start Windows Normally". This happens again and again - like an infinite loop :-) Case 2: Next, I used Rufus 1.2.0 to transfer files to my flash drive and it screwed everything up. Even when I boot from the flash drive, it arrives at the same screen, "Start Windows Normally". It doesn't show any sign of the flash drive being detected. Then I realized that it simply copies everything to the flash disk. Case 3: Then, I started with Novicorp WinToFlash (giving utmost priority to this site). I booted with the disk. I entered the first part - "text mode". Some lines started running, like "Press F6 if you...". The last thing I saw was "Setup is starting Windows...". Suddenly a blue screen appeared, like this captured one. I have a suspicion that the same screen appears again and again in the first case. Man, I'm dead. Case 4: For the sake of my last hope, I used WinSetupFromUSB 0.1.1. I was shocked to arrive at a screen which said something like "GRUB4DOS" and offered some commands {command line, reboot, halt, \find menu.lst}, and when I go inside those "find" options, I see "Error 15: File not found". Googling provided some commands to mount the SETUPLDR.BIN file in the "grub" thing, which also proved unsuccessful... Some sites say that a factory reset uses only some function keys. A guy said that it's F11 for Lenovo. Screw him. It's all a waste of time. But I think SE will help me out. Are our government IT guys doing this to me? Are they soooo smart that they can spark some blue screen in front of me just to freak me out? Any suggestions or other (useful) USB transfer tools would be appreciated. It's very urgent. So it'd be better if you guys pay some attention to debugging this and help me out..? Thanks for your time guys :-)
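    For reference only, and as a guess rather than a verified fix: "Error 15: File not found" from GRUB4DOS usually just means a path in menu.lst on the stick doesn't match what is actually on the drive. The text-mode entry that XP-from-USB tools generate typically looks something like the lines below - the file names and folder are illustrative, since they depend entirely on how the tool laid out the stick:

        title First part of Windows XP Setup
        find --set-root /I386/SETUPLDR.BIN
        chainloader /I386/SETUPLDR.BIN
        boot

    Checking that the file named in menu.lst really exists at that exact path on the flash drive is the first thing worth trying.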

    Read the article

  • Installing .NET application on IIS 7.5 issues

    - by Juw
    Really need some help here. I am at a loss. I am trying to install a webservice that some other guy wrote in .NET. I have some basic IIS understanding. The webservice works just fine on my dev computer. But now i try to move the webservice to a production server and bad things happens. The webservice has been located in C:\inetpub\wwwroot\ dir on the dev server. But on this production server it is to be located in D:\services\ I have managed to install an application on the production server and everything seems fine and dandy. But when i "Test Settings" in the initial setup i get "Invalid application path" error. But i can just close it down and still install it. But when i try to access the webservice with: http://myserver.com/webservice/GetData nothing happens. Just a blank page and when i check the response headers...500 error. I don´t know what is going on here or where the problem is. I post the config file here so someone hopefully might notice something odd. Thanx in advance! EDIT: The config file is from my dev server. I just copied it to my production server...but that obviously didn´t work :-) UPDATE: I noticed that my dev server run in an Application pool with Net 4 and in "classic" "mode". On the production server it was in NET 4 but in "integrated" mode. So i changed it to "classic". I still get a blank page. But checking the log will output this: 2012-10-03 14:57:00 ip removed GET /boo/GetData - 80 - ip removed Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:15.0)+Gecko/20100101+Firefox/15.0.1 404 2 1260 203 <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <identity impersonate="true" /> <!-- Impersonate NT AUTHORITY/IUSR --> <compilation targetFramework="4.0"> <assemblies> <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b7735c561131e089" /> </assemblies> </compilation> <pages controlRenderingCompatibilityVersion="3.5" clientIDMode="AutoID" /> </system.web> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <httpErrors existingResponse="PassThrough" /> <httpProtocol> <customHeaders> <add name="Access-Control-Allow-Origin" value="*" /> </customHeaders> </httpProtocol> <directoryBrowse enabled="false" /> </system.webServer> <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <standardEndpoints> <webHttpEndpoint> <!-- Configure the WCF REST service base address via the global.asax.cs file and the default endpoint via the attributes on the <standardEndpoint> element below --> <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" /> </webHttpEndpoint> </standardEndpoints> </system.serviceModel> <connectionStrings> <add name="Entities" connectionString="metadata=res://*/DataModel.csdl|res://*/DataModel.ssdl|res://*/DataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=someip;initial catalog=db_90;User ID=user1;Password=access2;multipleactiveresultsets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> </configuration>
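    One thing worth checking, since the 404 only appeared after the pool switch: extensionless WCF REST URLs such as /webservice/GetData are dispatched by managed modules, which only see every request when the application pool runs in Integrated mode (that is what the runAllManagedModulesForAllRequests="true" setting in this config relies on). Under Classic mode those URLs typically fall through to the static-file handler or get blocked at the ISAPI and CGI Restrictions level, and either way the result is a 404. A sketch of moving the application back onto the stock integrated .NET 4 pool - the site and application names here are assumptions, substitute your own:

        %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/webservice" /applicationPool:"ASP.NET v4.0"

    The earlier blank page with a 500 is a separate issue; setting errorMode="Detailed" on the httpErrors element should at least make IIS say what it is.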

    Read the article

  • Setting environment variables in OS X

    - by Percival Ulysses
    Despite the warning that questions that can be answered are preferred, this question is more a request for comments. I apologize for this, but I feel that it is valuable nonetheless. The problem to set up environment variables such that they are available for GUI applications has been around since the dawn of Mac OS X. The solution with ~/.MacOSX/environment.plist never satisfied me because it was not reliable, and bash style globbing wasn't available. Another solution is the use of Login Hooks with a suitable shell script, but these are deprecated. The Apple approved way for such functionality as provided by login hooks is the use of Launch Agents. I provided a launch agent that is located in /Library/LaunchAgents/: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>user.conf.launchd</string> <key>Program</key> <string>/Users/Shared/conflaunchd.sh</string> <key>ProgramArguments</key> <array> <string>~/.conf.launchd</string> </array> <key>EnableGlobbing</key> <true/> <key>RunAtLoad</key> <true/> <key>LimitLoadToSessionType</key> <array> <string>Aqua</string> <string>StandardIO</string> </array> </dict> </plist> The real work is done in the shell script /Users/Shared/conflaunchd.sh, which reads ~/.conf.launchd and feeds it to launchctl: #! /bin/bash #filename="$1" filename="$HOME/.conf.launchd" if [ ! -r "$filename" ]; then exit fi eval $(/usr/libexec/path_helper -s) while read line; do # skip lines that only contain whitespace or a comment if [ ! -n "$line" -o `expr "$line" : '#'` -gt 0 ]; then continue; fi eval launchctl $line done <"$filename" exit 0 Notice the call of path_helper to get PATH set up right. Finally, ~/.conf.launchd looks like that setenv PATH ~/Applications:"${PATH}" setenv TEXINPUTS .:~/Documents/texmf//: setenv BIBINPUTS .:~/Documents/texmf/bibtex//: setenv BSTINPUTS .:~/Documents/texmf/bibtex//: # Locale setenv LANG en_US.UTF-8 These are launchctl commands, see its manpage for further information. Works fine for me (I should mention that I'm still a Snow Leopard guy), GUI applications such as texstudio can see my local texmf tree. Things that can be improved: The shell script has a #filename="$1" in it. This is not accidental, as the file name should be feeded to the script by the launch agent as an argument, but that doesn't work. It is possible to put the script in the launch agent itsself. I am not sure how secure this solution is, as it uses eval with user provided strings. It should be mentioned that Apple intended a somewhat similar approach by putting stuff in ~/launchd.conf, but it is currently unsupported as to this date and OS (see the manpage of launchd.conf). I guess that things like globbing would not work as they do in this proposal. Finally, I would mention the sources I used as information on Launch Agents, but StackExchange doesn't let me [1], [2], [3]. Again, I am sorry that this is not a real question, I still hope it is useful.
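    A quick way to verify that the agent actually ran at login (the plist file name below is an assumption - use whatever you named the file in /Library/LaunchAgents) is to load it by hand and then ask launchd for one of the variables:

        launchctl load /Library/LaunchAgents/user.conf.launchd.plist
        launchctl getenv PATH

    GUI applications started after that point (via the Finder or Spotlight) should then inherit the values.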

    Read the article

  • Can spliting an access database cause printer and reporting issues?

    - by leeand00
    We have a setup in which our users log into an Access database using MS Access 2003 over an RDP connection. The users log in to their own machines first using a roaming profile. They then click an RDP connection file on the desktop and log in to the remote server, via RDP, where they use MS Access as the shell; they don't have any access to any of the explorer.exe features such as the Start menu. The database they are logging into is more of an application, and provides functionality for entering data, querying data, and running reports via form-based menus. It all worked pretty well until we split the database as it was nearing 2 GB in size. We moved the payroll data out into a separate partition, a database with the same name in a different folder, both of them on the server. Only two tables were moved into this new database partition, and they were re-linked as external tables in the new partition. Now while everything appears to be working fine data-wise after the split, there's a new issue when our users log in via RDP and attempt to run reports: often the report will not display and instead the user sees an error about the click event of the form. At first I didn't even know it was printer-related, as we didn't really change anything related to the printers as far as I knew. Confused about the error, I talked to the guy who previously worked here and who was in charge of splitting the database, and he told me to tell the users to set their default printers (on their local machines, not on the server) to the "printer" Microsoft XPS Document Writer, which isn't a physical printer at all. This allowed the users to display their reports, but if they want to print out reports, they are required to go to the File menu and select Print; clicking the print icon on the toolbar takes them to a Save As... dialog, as would be expected when using the Microsoft XPS Document Writer as your default printer. It's easy to tell if a user is having the problem because a quick mouseover of the printer icon will yield a tooltip of (none) when they cannot access their reports, and a tooltip of Microsoft XPS Document Writer when they can view the reports. If the user's printer is set to anything other than Microsoft XPS Document Writer as the default on their local machine, then (none) is always displayed when they RDP to the database. The RDP settings are set up to transfer the local printer to the server. Telling the users to do this to print has been more of a band-aid on the whole situation until we find a better solution and an explanation as to why splitting a database would prevent users from printing or even viewing Access database reports. Which is why I'm here asking this question. Also of note: all the printers on the network now show up on the server, so that when the users do click File->Print to print their reports on a physical printer, they have to look through a huge list of printers to find theirs in the dropdown. So the little band-aid fix we have is not ideal. Previously, only the printers on the user's local machine were displayed here, and not all the printers on the network. My co-worker seems to think this has something to do with permissions; I personally think it has to do with roaming profiles and Group Policies, which is what I've been reading up on. I really don't know how to fix this or how it is related to splitting the database.

    Read the article

  • My 2009 MacBook Logic board failed - options to proceed and how difficult?

    - by user181061
    Scannerz just gave my MacBook logic board a big fat F! I upgraded from Snow Leopard to Mountain Lion about 3 weeks ago. The system was running short of memory so I upgraded it. The system was running fine for about 2 weeks. Yesterday the thing started acting erratic. A lot of spinning beach balls, delays, and then some errors saying files couldn't be read from or written to the drive. I figured the drive was going because the system is over 3 years old. I ran Scannerz on it and it indicated a lot of errors and irregularities. I rescanned it in cursory mode, and none of them were repeatable; they just showed up all over the place in different regions of the scan. I went through the docs and they implied either an I/O cable was bad, a connection was damaged, or the logic board was bad. I tossed on my backup of Snow Leopard that I cloned from the original hard drive, because I figured Mountain Lion was to blame, and booted from the USB drive with the clone on it. It wasn't. I performed scans on every single port, and errors and irregularities that couldn't be repeated were showing up on every single one of them. I then, for kicks, put a CD into the CD player. Scannerz doesn't test optical drives, but I figured surely that would work. No, it wouldn't. More spinning beach balls and messages telling me it can't be read. It was working fine 3 days ago. I know a lot of people don't like MacBooks, but mine's been great, at least until now. It was working great even with Mountain Lion after the upgrade. The system is a mid-2009 MacBook. In my opinion, it's a complete waste to toss this system. The display is too good, the keyboard works great, and it still looks good; plus this type of MacBook still uses the FireWire 400 port, and I use that for Time Machine backups. I've tried reseating the RAM; it didn't do anything. I shut the system down and put in the old RAM, booted to Snow Leopard, and the problems persisted. Here are my questions: The Scannerz documentation somewhere said something about the AirPort card not being seated properly, but when I go to iFixit it's apparent, at least I think it's apparent, that this isn't a slot-type AirPort card that the user can easily install or remove. If the cables or connections to the AirPort card are bad, could they be causing this problem? How about any other connections that can be intermittent, failing or erratic? Are there any types of resets that I could possibly do to get rid of this? For any of those that have replaced a logic board on a MacBook, if this really is the culprit, are there any "gotchas" I need to be aware of? As an FYI, I replaced the hard drive on an old iBook @500MHz that I had a long time ago, and I replaced the drive on a 1.33GHz PowerBook about 6 years ago. You have to be careful, but using some of the info on web sites like iFixit it's not that hard. Time consuming, but not that hard. The Intel-based MacBooks to me look like they're easier to service than either of those. I'm thinking about getting a unit off of eBay that matches mine but has something else wrong with it, like a busted display. I REFUSE to buy a new system. A guy at my office has a 2007 Mac Pro and he can't upgrade to Mountain Lion because his system is "obsoleted." That's ridiculous. If you pay nearly $7,500 for a system it shouldn't be trash just because Apple decides they don't have enough money (sorry for the soap box, but it's true, IMO!) Any input is appreciated.

    Read the article

  • How to use BCDEdit to dual boot Windows installations?

    - by Ian Boyd
    What are the bcdedit commands necessary to setup dual boot between different installations of Windows?5 Background i recently installed Windows 8 onto a separate hard drive1. Now that Windows 8 in installed i want to dual-boot back to Windows 7. i have my two2 hard drives: So you can see that i have my two disks, with the partitions containing Windows: Windows 7: \\PhysicalDisk0 (partition 03) Windows 8: \\PhysicalDisk2 (partition 1) What i'm trying to figure out how is how to use bcdedit to instruct the thing that boots Windows that there is another Windows installation out there. Running bcdedit now, it shows current configuration: C:\WINDOWS\system32>bcdedit Windows Boot Manager -------------------- identifier {bootmgr} device partition=\Device\HarddiskVolume2 description Windows Boot Manager locale en-US inherit {globalsettings} integrityservices Enable default {current} resumeobject {ce153eb7-3786-11e2-87c0-e740e123299f} displayorder {current} toolsdisplayorder {memdiag} timeout 30 Windows Boot Loader ------------------- identifier {current} device partition=C: path \WINDOWS\system32\winload.exe description Windows 8 locale en-US inherit {bootloadersettings} recoverysequence {ce153eb9-3786-11e2-87c0-e740e123299f} integrityservices Enable recoveryenabled Yes allowedinmemorysettings 0x15000075 osdevice partition=C: systemroot \WINDOWS resumeobject {ce153eb7-3786-11e2-87c0-e740e123299f} nx OptIn bootmenupolicy Standard hypervisorlaunchtype Auto i cannot find any documentation on the difference between Windows Boot Manager and Windows Boot Loader. Documentation There is some documentation on Bcdedit: Technet: Command Line Reference - Bcdedit Technet: Windows Automated Installation Kit - BCDEdit Command Line Options Whitepaper - BCDEdit Commands for Boot Environment (Word Document) But they don't explain how edit the binary boot configuration data If i had to guess, i would think that a Windows Boot Manager instructs the BIOS what program it should run. That program would give the user a set of boot choices. That leaves Windows Boot Loader do be a particular boot choice, that represents a particular installation of Windows. If that is the case i would need to create a new Windows Boot Loader entry. This means i might want to use the /create parameter: /create Creates a new boot entry: bcdedit [/store filename] /create [id] /d description [/application apptype | /inherit [apptype] | /inherit DEVICE | /device] So i assume a syntax of: >bcdedit /create /d "The old Windows 7" /application osloader Where application can be one of the following types: Apptype Description BOOTSECTOR The boot sector application OSLOADER The Windows boot loader RESUME A resume application Unfortunately, the only documentation about osloader is "The Windows boot loader". i don't see how that can differentiate between Windows 8 on one hard drive, and Windows 7 on another. The other possible parameter when /create a boot loader is >bcdedit /create /D "Windows Vista" /device "The Quick Brown Fox" Unfortunately the documentation is missing for /device: /device Optional. If id is not set to a well-known identifier, the option that is used to specify the new boot entry as an additional device options entry. Since i did not set id to a well-known identifier, i must set /device to "the option that is used to specify the new boot entry as an additional device options entry". i know all those words; they're all English. But i have on idea what it is saying; those words in that order seem nonsensical. 
So i'm somewhat stymied. i don't want to be like Dan Stolts from Microsoft: I found no content that was particularly helpful when I hosed my machine by playing with BCDEdit. This post would have been ok if there was much more detail especially on the /set command OSDevice, etc. So once I got my machine fixed, I documented the solution and the information is here.... i mean, if a Microsoft guy can't even figure out how to use BCDEdit to edit his BCD, then what chance do i have? Bonus Reading BCDEdit Command-Line Options Bcdedit Server 2008 R2 or Windows 7 System Will NOT Boot After Making Changes To Boot Manager Using BCDEdit Visual BCD Editor4 Windows 7 and Windows 8 RTM Dual Boot Setup Footnotes 1 Since the Windows 8 installer would have damaged my Windows 7 install, i decided to unplug my "main" hard drive during the install. Which is a long-winded explanation of why the Windows 8 installer didn't detect the existing Windows 7 install. Normally the installer would have automatically created the required entries for dual-boot. Not that the reason i'm asking the question is important. 2 Really there's three drives, but the third is just bulk storage. The existence of a 3rd hard drive is irrelevant to the question. i only mention it in case someone wants to know why the screenshot has 3 hard drives when i only mention two. 3 i arbitrarily started numbering partitions at "zero"; not to imply that partitions are numbered starting at zero. i only mention partitions because i don't see how any boot-loader could do its job without knowing which partition, and which folder, an installation of Windows is located in. 4 i'm asking about BCDEdit. i tried Visual BCD Editor. It seems to be a visual BCD editor. That is to say that it's a GUI, but still uses the same terminology as BCDEdit, and requires the same knowledge that BCD doesn't document. 5 For simplicity's sake we'll assume that all installations of Windows i want to dual-boot between are Windows Vista or later, making them all compatible with BCDEdit and the binary boot loader. The alternative would require delving into the intricacies of the old ntloader. Nor am i asking about dual booting to Linux; or how to boot to a Virtual Hard Drive (vhd) image. Just modern versions of Windows on existing hard drives in the same machine. Note: You can ignore everything after the word Background. It's all pointless exposition to satisfy some people's need for "research effort" before they'll consider being helpful. Some people have even been known to summarily close questions unless there is research effort. Some people have been known to close questions if there is too much research effort. Some people close questions when i put the note saying that they can ignore everything after the Background out of spite. Some people are just grumpy.
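    For what it's worth, here's a rough sketch of the usual recipe for this situation (not an authoritative answer - it assumes the Windows 7 partition is visible as, say, D: from inside Windows 8; substitute whatever letter Disk Management shows): clone the existing loader entry and repoint it, rather than building one from scratch with /create.

        C:\> bcdedit /copy {current} /d "Windows 7"
        rem  bcdedit prints the GUID of the new entry; use it below in place of {NEW-GUID}
        C:\> bcdedit /set {NEW-GUID} device partition=D:
        C:\> bcdedit /set {NEW-GUID} osdevice partition=D:
        C:\> bcdedit /displayorder {NEW-GUID} /addlast

    The copied entry already points at \Windows\system32\winload.exe, so only device and osdevice need to change; /displayorder /addlast simply adds the new entry to the boot menu if it doesn't already appear there.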

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009:

        On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns.

    Has that amount of transparency happened? Is there a technet article somewhere?

    Say my score was limited by my Memory subscore of 5.9. A naive person would suggest: "Buy faster RAM." Which is wrong, of course. From the Windows help:

        If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9.

    You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful.

    What i am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information i've been able to compile so far, chiefly from 3 Windows blog entries and an article:

        Memory subscore

        Score    Conditions
        =======  ================================
        1.0      < 256 MB
        2.0      < 500 MB
        2.9      <= 512 MB
        3.5      < 704 MB
        3.9      < 944 MB
        4.5      <= 1.5 GB
        5.9      < 4.0GB-64MB on a 64-bit OS; Windows Vista highest score
        7.9      Windows 7 highest score

        Graphics subscore

        Score    Conditions
        =======  ======================
        1.0      doesn't support DX9
        1.9      doesn't support WDDM
        4.9      does not support Pixel Shader 3.0
        5.9      doesn't support DX10 or WDDM1.1; Windows Vista highest score
        7.9      Windows 7 highest score

        Gaming graphics subscore

        Score    Result
        =======  =============================
        1.0      doesn't support D3D
        2.0      supports D3D9, DX9 and WDDM
        5.9      doesn't support DX10 or WDDM1.1; Windows Vista highest score
        6.0-6.9  good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
        7.0-7.9  even higher framerates at even higher resolutions
        7.9      Windows 7 highest score

        Processor subscore

        Score    Conditions
        =======  ==========================================================================
        5.9      Windows Vista highest score
        6.0-6.9  many quad core processors will be able to score in the high 6 to low 7 ranges
        7.0+     8-core systems will be able to approach 8.9
        7.9      Windows 7 highest score

        Primary hard disk subscore (note)

        Score    Conditions
        =======  ========================================
        1.9      Limit for pathological drives that stop responding when pending writes
        2.0      Limit for pathological drives that stop responding when pending writes
        2.9      Limit for pathological drives that stop responding when pending writes
        3.0      Limit for pathological drives that stop responding when pending writes
        5.9      highest you're likely to see without an SSD; Windows Vista highest score
        7.9      Windows 7 highest score

    Bonus Chatter

    You can find your WEI detailed test results in:

        C:\Windows\Performance\WinSAT\DataStore

    e.g.
    2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml

        <WinSAT>
            <WinSPR>
                <DiskScore>5.9</DiskScore>
            </WinSPR>
            <Metrics>
                <DiskMetrics>
                    <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
                    <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
                    <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
                </DiskMetrics>
            </Metrics>
        </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine, then: how do i increase my hard-drive's random I/O throughput?

    Update - Amount of memory limits rating

    Some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

        <WinSPR>
            <LimitsApplied>
                <MemoryScore>
                    <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
                </MemoryScore>
            </LimitsApplied>
        </WinSPR>

    References

        Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
        Understand and improve your computer's performance in Windows Vista
        Engineering Windows 7 Blog: Engineering the Windows 7 “Windows Experience Index”
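    Update: for anyone else poking at this, here's a quick way to pull the stored subscores and re-run individual tests without digging through the DataStore xml by hand. This is only a sketch: i'm assuming the Win32_WinSAT WMI class and these winsat switches behave the same on your machine as mine, and everything needs an elevated command prompt.

        rem Dump the stored subscores and the overall base score from WMI
        wmic path Win32_WinSAT get CPUScore,D3DScore,DiskScore,GraphicsScore,MemoryScore,WinSPRLevel

        rem Re-run the full formal assessment from scratch
        winsat formal -restart clean

        rem Or re-measure just the disk's random reads, the part my drive scored poorly on
        winsat disk -ran -read -drive c

    It still doesn't answer what the thresholds are, but it at least makes re-testing after a hardware or driver change quick.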

    Read the article

  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
    One of the customers of the company I work for has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot this issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP. I should put https://<customer's IP> in my browser and get a web page.

    Well, it works from any network I've tried (even from my smartphone), just not from my company's LAN. I thought it might be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I connected a PC directly to the network connection of our provider, bypassing our firewall completely, and still it didn't work (everything else worked).

    So I asked our Internet provider's customer service for help, and they asked me to do a tracert to our customer's IP. The tracert is successful, as the final hop shown in the output is the host I want to reach. So they said there's no problem. :(

    I also tried telnet <customer's IP> 443 and that works as well: I get a blank screen with the cursor blinking (I've tried using another random port and that gives me an error message, as it should). Still, from any browser of any PC in my LAN I can't open that URL.

    I tried checking the network traffic with Wireshark: I see the packets going through and answers coming back, though the packets I see passing are far fewer than when I successfully connect to another HTTPS website. See the attached screenshot: I had to blur the IPs; anyway, the longer string is my PC's local IP address, the shorter one is the customer's public IP.

    I don't know what else to try. This is the only IP doing this... Any idea what I could try to find a solution to this issue? Thanks, let me know if you need further details.

    Edit: when I say "it doesn't work" I mean: the page doesn't open, the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a webpage (it's the logon page for the customer's firewall admin interface: so there's the firewall brand's logo and there are fields to enter a user id and a password).

    Update 13/09/2012: tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did:

        Ran a Kubuntu 12.04 live distro on a spare laptop;
        Updated all the packages I could and installed Wireshark;
        Attached it to my LAN and verified that I couldn't open https://<customer's IP>. Verified that the Wireshark trace for this attempt was the same as the one I've already posted;
        Verified that I could connect to another customer's host using rdesktop (it worked);
        Tried to rdesktop to <customer's IP>, here's the output:

            kubuntu@kubuntu:/etc$ rdesktop <customer's IP>
            Autoselected keyboard map en-us
            ERROR: recv: Connection reset by peer

        Disconnected the laptop from the LAN;
        Disconnected the firewall from the Extranet connection, connected the laptop instead;
        Set its network configuration so that I could access the Internet;
        Verified that I could connect to other websites in http and https and in RDP to other customers' hosts - it all worked as expected;
        Verified that I could still traceroute to <customer's IP>: I could;
        Verified that I still couldn't open https://<customer's IP> (same exact result as before);
        Checked the Wireshark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all;
        Tried to run rdesktop again, with a slightly different result:

            kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP>
            Autoselected keyboard map en-us
            ERROR: <customer's IP>: unable to connect

        Finally gave up, put everything back as it was before, turned off the laptop and lost the Wireshark traces I had saved. :( I still remember them very well though. :)

    Can you get anything out of it? Thank you very much.

    Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get:

        user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443
        CONNECTED(00000003)

    If I now type GET / the output pauses for several seconds and then I get:

        write:errno=104

    I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks.

    Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies; that's why I get a connection using telnet or the OpenSSL client. This is the Wireshark trace from inside our LAN: But this is the trace on the Extranet network card: This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it.

    Thanks to all that took the time to read my question and post suggestions. I'll update this post again.
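    P.S. The next thing I plan to try, when I can borrow the laptop again, is roughly this. It's just a sketch I haven't run yet: eth0 is only my guess at the interface name on the live laptop, and <customer's IP> needs substituting as usual.

        # Watch the raw packets for the customer's IP only, with no name resolution,
        # while retrying the connection from another terminal
        sudo tcpdump -ni eth0 host <customer's IP> and tcp port 443

        # Retry the connection with explicit timeouts and verbose output,
        # to see which step (TCP connect vs TLS handshake) actually hangs
        curl -vk --connect-timeout 10 --max-time 30 https://<customer's IP>/

    If tcpdump shows our SYNs leaving and nothing ever coming back, then the problem is somewhere past our provider's edge and I can stop blaming the laptop and our own equipment.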

    Read the article
