Search Results

Search found 2027 results on 82 pages for 'anon guy'.


  • Asus M4A79XTD-EVO / AMD Phenom II X4 965 Crashes / BSOD / Hangs / Restarts

    - by Tiby
    I'll try to be as concise as possible, because I have a lot to say about my problem; I'd rather say it when asked or when I feel it's necessary, just to keep this initial reading clearer. For about a year and a half I have had periods when my system shows all the problems in the title (I'll use the word 'crash' for any of them). Some patterns I've noticed (the list is not exhaustive):
    - it usually crashes when a CPU-intensive operation is in progress, like a game, video encoding, or HD movie rendering, but it also sometimes crashes when I'm doing nothing
    - after a first crash the system is very unstable, and sometimes it crashes even during POST, or doesn't boot at all
    Some months ago I went to a local service shop (one where you just put your computer on the table, sit there with a guy, and try to figure out the problem together; very rare these days). They used OCCT, and the machine crashed every time as he swapped parts to test them (PSU, RAM, video card, HDD). The last one was the CPU. They changed the CPU and it didn't crash any more. Then, when they put my CPU back, it also didn't crash. We figured the trouble was the thermal paste (probably some 2 years old), because it was the only thing changed during testing. Up until 2 weeks ago, I hadn't had any more problems.
    2 weeks ago the problems reappeared. I changed the thermal paste again, applied some Arctic Silver 5, and for about a week everything worked perfectly (tried some games and video encoding; no more crashes). But then it started crashing in the same fashion as the first time. This time, however, I noticed a very odd behaviour when I start the apps above:
    - in most cases, it crashes
    - if I start OCCT, turn on the CPU test, and then run any of the programs above, it doesn't crash, even with the CPU at 100% load (and 65-70 degrees Celsius)
    - if I shut down OCCT and continue using the programs, it crashes within a very short time (even with the CPU at 5-10% load and 40 degrees)
    There are so many patterns and temporary solutions that I've figured out over this year-and-a-half period that I can't include them all, because I don't know which ones are most relevant, but I'll happily provide any details you ask for. My system is:
    CPU: AMD Phenom II X4 965 (3400 MHz, 125 W)
    MB: ASUS M4A79XTD-EVO
    RAM: Corsair Vengeance 8 GB (2 x 4 GB) CL8 1600 MHz
    Video: HIS Radeon HD5770 1 GB
    PSU: Corsair 750 W
    HDD: Western Digital 1 TB
    OS: Win 7 Enterprise 64-bit (also tried Windows Server 2008 R2 Trial and Win XP)
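
    One way to pin down a load-correlated crash pattern like this is to keep a continuously flushed log of CPU load: after each reboot, the tail of the file shows what the machine was doing in its final seconds. A minimal sketch in Python (a hypothetical helper, assuming the third-party psutil package is installed; the log path is an arbitrary example):

        import os
        import time
        import psutil  # third-party: pip install psutil

        LOG = "crashlog.txt"  # arbitrary example path on a local disk

        with open(LOG, "a") as f:
            while True:
                stamp = time.strftime("%Y-%m-%d %H:%M:%S")
                load = psutil.cpu_percent(interval=1)  # % CPU over the last second
                f.write("%s cpu=%5.1f%%\n" % (stamp, load))
                f.flush()
                os.fsync(f.fileno())  # force to disk so a hard crash loses little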

    Read the article

  • What's the best scenario for using a wireless router with Comcast Business Class

    - by Buck
    Just had Comcast Business Class internet installed (usage details at the bottom of this post). During the call to order, I asked about the hardware they'd be providing and was told it was a DOCSIS 3 modem that I'd have to pay $7.00/month for. Figuring I'd have to buy a router anyway, I decided to get my own modem, a Surfboard SB6121 DOCSIS 3. I called tech support to ask some questions and learned that the modem they would have provided DID have a router built in. It's an SMCD3G-CCR. It's not wireless (we need wireless). The guy explained that it was better to have their hardware here, because if there's a problem with our service and we're using our own hardware, chances are they'll blame it on our hardware and do nothing, since they don't support it. He explained that I could still hang my own wireless router off their modem/router, and if we ever had any service problems, we'd be able to plug directly into their hardware; they'd be able to tell where the problem is and wouldn't be able to pawn it off onto "customer provided equipment". That all said, a few questions:
    1. Am I better off returning my Surfboard modem and getting the Comcast one?
    2. If I get a wireless router and plug it into one of the Ethernet ports of the Comcast device, should I NOT plug anything else into the Comcast device, since it would be on a different network from anything connecting via the wireless router? Is that correct?
    3. Given that I know VERY LITTLE about networking and setting up hardware like this, and since I need wireless and will HAVE to get a wireless router to work with this Comcast device, do I need to do anything with the settings of the Comcast device? Do I use security on the Comcast device, on the wireless router, or on both?
    4. Any suggestions, or anything I need to think about, given this scenario, in order to use a business-type VoIP service like RingCentral, Jive, or Nextiva?
    5. Any recommendations on a wireless router for this scenario?
    Usage details: we are running 2 PCs (possibly 3-4 in the future); they could be wired for the time being if needed, but we would prefer wireless. We would like to have a networked hard drive and a networked printer, and we NEED a business-type VoIP service ASAP for 2 phone lines. We would like to hook up some IP cameras at some point (but not the kind that require static IPs, since I don't have one, nor do I plan to pay Comcast another $15/month for one). I don't have, or plan to have, any type of web server or anything like that. I want to use WPA or WPA2 security and take advantage of the NAT feature of the router for additional protection (that's the extent of my networking knowledge).
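
    On question 2: chaining your own router off the Comcast gateway creates two NAT layers, and anything plugged straight into the gateway ends up on a different subnet from the wireless clients. A quick way to tell which situation you are in is to check whether the WAN address your router reports is in a private range. A minimal standard-library Python sketch (the sample addresses are hypothetical):

        import ipaddress

        def is_private(addr: str) -> bool:
            """True if addr is in a private/non-routable range (e.g. RFC 1918)."""
            return ipaddress.ip_address(addr).is_private

        # Example: the WAN address shown on the wireless router's status page.
        # Private means the router sits behind the gateway's NAT (double NAT);
        # public means the gateway is passing the address straight through.
        for wan in ("10.1.10.5", "73.98.44.10"):  # hypothetical values
            print(wan, "private" if is_private(wan) else "public")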

    Read the article

  • Can't authenticate as my user, sudo, or root in Debian "Jessie" GNOME anymore?

    - by Janar
    I'm a Debian beginner and a GUI guy, in a bit of trouble: I can't authenticate as sudo/gksu/root/su, nor as my (main/super)user, after removing my user password via the GNOME user settings.
    History of actions (probably irrelevant, but for completeness): I installed Debian "Jessie" GNU/Linux with the Xfce GUI (en-US) as the only OS; the hardware is a ThinkPad W510. I skipped setting a root password during setup, so that the first user easily gets superuser rights via sudo. I have logged in (as I always have) with GNOME (3.4.x), not once with Xfce; I installed Xfce only to get easier package-level control, so I could set up GNOME much more to my liking. I added more Jessie repositories (the same ones Wheezy stable has by default, but for Jessie; Jessie by default only had repositories for security updates). I installed lots of GTK(3)- and GNOME(3)-based software (and restarted after this), installed the proprietary graphics driver for my Nvidia Quadro (and restarted once again after that), and installed more things related to my work/school/development.
    The actual problem: I had planned to restart again, but wanted to set up auto-login first. Instead, I set my user password to none (don't ask why; perhaps it was caused by being awake for a looooong time), noticed it, and also enabled auto-login, but couldn't undo my previous mistake and create a new password for myself. As my password is set to none, I would have expected that simply pressing Return on an empty password field would do, but it won't authenticate. I tried Alt+F2 "gksu gedit", as well as sudo wget "https://www.some-page.eu/file.ext" and "su" in terminals; none of them worked (quite logical, actually, as I'm the sudoer and highest-ranked superuser, besides being the only user on this computer).
    Current state: everything worked, and still works, nicely after this accident, apart from the password prompts. I'm too spooked to log out or restart. The Synaptic package manager is still open with root rights (the only one left open prior to the issue and not closed since, just in case). I googled for help and read some manuals/FAQs/how-tos; they mostly lead to sudoers file management, but I haven't found one specifically for my issue, so I'm still none the wiser. I really hope I don't have to redo the OS install all over again because of just one stupid mistake. Thanks for your reply :-)

    Read the article

  • Set up Gmail with Google Apps for own domain

    - by erdomester
    I rent a server from a German company. I have remote access to it as well as WHM and cPanel. I decided to use Google's mail servers for obvious reasons. I am not an admin, just an average guy trying to set up what needs to be set up. The problem is that I am unable to make the necessary settings. I watched YouTube tutorials and followed written ones as well as Google's help, but there is (at least) one serious problem with my domain settings. The domain console always says:
    Your MX records are incorrect
    When I check dappwall.com on mxtoolbox.com it says:
    Pref  Hostname           IP Address   TTL
    10    mail.dappwall.com  46.4.88.247  24 hrs
    But this is not the hostname. I checked WHM, and my hostname is server1.dappwall.com. I can confirm it by typing the hostname command in PuTTY. However, if I do an MX lookup at mxtoolbox.com on server1.dappwall.com or mail.dappwall.com, I get:
    Lookup failed after 1 name servers timed out or responded non-authoritatively
    I ran checks with the Google Apps Toolbox on dappwall.com and two problems emerged:
    1. No Google mail exchangers found. Relayhost configuration? 10 mail.dappwall.com
    In Google Apps > Settings for Gmail > Advanced settings it also says that my current MX record for dappwall.com is:
    Priority  Points to
    10        MAIL.DAPPWALL.COM.
    So mail.dappwall.com again. I also have access to a robot provided by the company I rent the server from. There I see this mail host in two places, but how should I modify it (if that's even necessary)? I set Email routing to Automatically Detect Configuration.
    2. There SHOULD be a valid SPF record. "v=spf1 include:_spf.google.com ~all"
    In the DNS Zone Editor I added this SPF record:
    Name           TTL   Class  Type  Record
    dappwall.com.  1440  IN     TXT   v=spf1 include:_spf.google.com ~all
    On the cPanel Email Authentication page it says:
    SPF: Status: Enabled
    Warning: cPanel is unable to verify that this server is an authoritative nameserver for dappwall.com. [?]
    Your current raw SPF record is: v=spf1 include:_spf.google.com ~all
    How can I confirm that my server is an authoritative nameserver for dappwall.com? In WHM > Service Configuration > Mailserver Selection, Dovecot was set, but I disabled it (I don't know if that's OK). What am I missing here? Where is that mail.dappwall.com coming from?
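
    A quick way to see what the outside world actually gets back for these records is to query a public resolver directly, independent of WHM/cPanel's view. A minimal sketch (assuming the third-party dnspython package, version 2.x; older versions use dns.resolver.query instead of resolve). For Google Apps-era mail you would expect MX entries pointing at Google's servers, e.g. ASPMX.L.GOOGLE.COM, rather than mail.dappwall.com:

        import dns.resolver  # third-party: pip install dnspython

        domain = "dappwall.com"

        # MX records as actually published in DNS.
        for rr in dns.resolver.resolve(domain, "MX"):
            print("MX ", rr.preference, rr.exchange)

        # TXT records, to confirm the SPF string is really being served.
        for rr in dns.resolver.resolve(domain, "TXT"):
            print("TXT", rr.to_text())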

    Read the article

  • Create and manage child name servers (glue records) within my domain?

    - by basilmir
    Preface: I use a top-level domain provider that only allows me to add "normal" third-party name servers (a list where I can add "ns1.hostingcompany.com"-type entries, nothing else) AND "child name servers", which I can later attach to my parent account (ns1.myowndomain.com and an IP address). They do not provide other means of linking up. I want to host my own server and DNS, even with just one name server (at first).
    My setup:
    - Airport Extreme: gets a static IP address from my ISP
    - Mac Mini Server: sits behind the Airport and gets 10.0.1.2
    My problem is that I can't seem to configure DNS correctly. I added a "child nameserver" with my Airport's external static IP address at the top-level provider, so to my understanding I should have all DNS traffic redirected to my Airport. I've opened port 53 UDP to let the traffic in. Now, what I don't get is this: my Mini Server is sitting on the 10.0.1.2 address, and I have set up DNS correctly there, with an A record to resolve my server AND a reverse lookup for 10.0.1.2. So it's OK for "internal stuff". Here is the kicker: when a request comes from the exterior for a reverse lookup, how does the server "know" to answer for my real external address when everything it has points at 10.0.1.2? I can't begin to describe the MX record bonanza... How do I set this up "right"? Do I "need" my Mini Server to sit on the external address directly (I can see how this could be the preferred solution, being closer to a "real" server as I picture it)? If not, do I need a PTR record on the 10.0.1.2 server, but with the external address in it?
    My dream: I will extend this setup with multiple Minis in different cities where I work. I want a distributed something (Xgrid comes to mind).
    PS. Be gentle, I've read 2 books on the subject, and bought both the Lynda Essentials and DNS and Networking courses to boot; still I'm far from being on top of things.
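
    From outside the LAN, the delegation can be sanity-checked by asking a public resolver two questions: which name servers the zone is delegated to, and whether the child name server's glue resolves to the Airport's external static IP. A minimal sketch (assuming the third-party dnspython package; myowndomain.com stands in for the real domain, as in the question):

        import dns.resolver  # third-party: pip install dnspython

        domain = "myowndomain.com"  # placeholder, as in the question
        child_ns = "ns1." + domain

        # Which name servers does the world believe serve this zone?
        for rr in dns.resolver.resolve(domain, "NS"):
            print("NS", rr.target)

        # The child name server's glue must resolve to the Airport's external
        # static IP, otherwise the delegation is circular and dead.
        for rr in dns.resolver.resolve(child_ns, "A"):
            print("A ", rr.address)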

    Read the article

  • Cannot access boot menu with Compaq 8510p

    - by pinouchon
    I have a problem with my HP Compaq 8510p laptop: when I start it, the fan starts and the power light is on, but the screen displays nothing. When I insert a bootable CD, the drive light comes on (meaning the CD is recognized), but it stops after a few seconds. Same thing with any hard drive: the drive is recognized but does not boot. What I've tried so far:
    - Changing the hard drive, or booting with no hard drive (same problem)
    - Plugging another display in via VGA: no display on the other screen
    - Inserting a Windows 7 CD (same problem)
    - Booting only on battery, with battery and power cable, and only with the power cable (same problem)
    So it looks like something is preventing the laptop from booting and displaying the boot menu. Have you experienced something similar with a laptop? What could be wrong? The laptop is out of warranty. The system used to be Windows 7 x64.
    Edit: I went to the help desk of my university. A guy took a look (he also tried to plug in an external screen) and said that the computer is dead: on these HP laptops the GPU eventually dies, and so does the motherboard, because they are linked. He has seen this many times, and even if I can fix the problem, the laptop will crash again after a while. Do you have similar experience with HP laptops? (Mine is 4 years old.)
    Edit 2: Believe it or not, my laptop is magically working again. I have no clue what is going on. Now it is like starting an old car: when you turn it on, you secretly hope it will actually start... With that said, I expect my laptop to break again in the near future (it's an HP, after all), and I will accept an answer or add my own accordingly.
    Edit 3: As expected, the laptop is down again. This time, sometimes when I power it up it shuts down automatically after 3 seconds, and sometimes it does not shut down at all. In addition, when it does not shut down on its own, the power button does not work: the only way to shut it down is by unplugging the battery. As before, the screen is black, and only the power and battery lights are on (the other ones, hard drive and WiFi, are off). I have tried plugging in another power supply, removing the battery, and removing the hard drive, without success. I might buy another laptop. I've brought the laptop to a repair shop. The problem is indeed that the graphics card is down. It will be replaced with a new one.

    Read the article

  • How to install RAID drivers on an already installed Windows 7?

    - by happysencha
    My setup:
    - 64-bit Windows 7 Ultimate
    - 6 GB RAM
    - Intel i7 920
    - Intel X25-M SSD 80 GB 2.5"
    - Club 3D Radeon HD5750
    - GA-EX58-UD4P motherboard
    I've been running fine with Windows 7 installed on the SSD. I wanted to create a mirrored RAID 1 setup for backups using two hard disks, so I ordered two Samsung HD203WI drives. This motherboard supports two different RAID controllers: Intel's ICH10R and Gigabyte's SATA2 controller. There are 6 SATA ports behind the ICH10R and 2 SATA ports on the Gigabyte controller. I googled around, and the ICH10R seemed the better choice; since then I've been trying to make it work. When I activate [RAID] mode in the BIOS, Windows 7 gives a BSOD exactly as described by this guy: "Windows 7 will start to boot, it gets to the screen where there are 4 colors coming together and it blue screens and restarts no matter what I do."
    First thing I did: turned off the RAID, booted into Windows, and tried to install the SATA RAID drivers from Gigabyte. I launched the driver installation program and it gave "This computer does not meet the minimum requirements for installing the software". I then tried Intel's Rapid Storage Technology drivers (which are apparently the same as the ones offered on Gigabyte's site), but that resulted in exactly the same error.
    I then detached the new Samsung hard disks from the SATA ports but left [RAID] enabled in the BIOS. To my surprise, it still BSOD'd, so at this point I knew it was an OS/driver issue. Also, I tried with the Gigabyte RAID enabled (while the ICH10R RAID was disabled) and it booted just fine.
    So then I thought that maybe I can't install the RAID drivers from within the OS. I caused the BSOD on purpose once again, and then, with the ICH10R RAID activated and the Samsung hard disks attached, I chose the Windows 7 Recovery mode in the boot menu. It sees some problem(s), tries to repair, does not succeed, and does not ask for drivers to install (which I put on a USB stick). I also tried to use the command line in the recovery environment:
    rundll32 syssetup, SetupInfObjectInstallAction DefaultInstall 128 iaStor.inf
    but it gave "Installation failed."
    So I'm clueless how to proceed. Do I really need to re-install Windows 7 and load the RAID drivers in the Win7 setup? I don't want to install any OS on the RAID; Windows 7 is and will stay on the SSD. I just want a RAID 1 backup using those two hard disks. I mean, why would I need to re-install the operating system just to add a RAID setup?

    Read the article

  • Recovering damaged external hard disk by installing internally

    - by nfarshchi
    I had a 1 TB Western Digital (My Book series) 3.5" USB 3.0 drive. One day, the SATA-to-USB3 converter board was damaged and has not worked since. I decided to open the cover and use the HDD as an internal drive. When I attached the HDD to my PC and booted into Windows, it asked me which partition style I wanted to use, "MBR or GPT" (I don't remember the exact wording). I chose MBR, and Windows gave me a 1 TB empty hard drive. I tried to recover with Recover My Files and some other recovery programs, but with no success. Someone told me that I should have chosen GPT instead of MBR. How can I do that now? Another guy told me that the SATA-to-USB3 converter board encodes the data it saves on the HDD, so you cannot use the disk internally without losing data, and that I should find another (exactly the same) SATA-to-USB3 board. That is impossible, because they are not produced any more. Please help me find a solution to bring back my data.
    UPDATE: I have a 1 TB WD My Book USB 3.0. The board that converts SATA to USB3 was damaged, so while the HDD was in the enclosure the computer did not recognize it. I opened the box and removed the HDD to use it internally. After connecting it to my PC, Windows showed me a message with two choices, MBR or GPT. I chose MBR, and Windows gave me a 1 TB empty new volume. I tried many recovery programs to recover my data, but with no success. I brought it to an expert recovery company, and they told me the converter board (SATA to USB3) applies some encryption to the data, and without that board you cannot recover anything. So I bought another empty WD enclosure and put the HDD inside, but even after that there are no files. I tried to recover again in this state, but again no success. So I have some unanswered questions: Does this converter board apply any password or encryption? If yes, how can I get around it? Has using many recovery programs affected my data? Any suggestion or solution for bringing back my data? I have used recovery programs such as: Recover My Files, EaseUS Data Recovery, Easy Recovery, TestDisk, Ontrack EasyRecovery.
    Note: when I was using TestDisk it asked me to choose which partition table I wanted to use. As it was, I chose NTFS; did this make any change to my data?

    Read the article

  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while I'm connected remotely via Citrix, so the latency of my internet connection is important. If I'm getting ping times above 250 ms, it becomes almost impossible to scroll, click, or type with accuracy. Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I'll get pings that range from 9 ms all the way up to 1300 ms. The problem seems to be at its worst during the hours of 1 PM to 4:30 PM. Outside of those hours the variance in pings settles down, mostly between 9 ms and 50 ms. The signal-to-noise ratio and upstream power are both fine on my modem; the values are here: http://pastebin.com/D4hWGPXf
    I ran a traceroute from my computer to google.com (the results of which are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside of our local network (73.98.44.1); the variance in ping times was exactly the same as when pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ (as you can see, it can get pretty bad).
    Three Comcast techs have been out (two of them were here when the problem wasn't happening), and they, as well as the regional tier 2 Comcast support, were unable to diagnose the problem. I now have a ticket open with tier 3 support but have yet to hear back from them. Does anyone know what could cause these sorts of problems, or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal; are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated!
    Edit: This is Comcast cable internet at a small start-up; we've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable).
    Update: Tier 3 Comcast support advised swapping out the modem. A tech came here today and did that; the same problem persists.
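
    To put numbers on "highly variable" (jitter is what hurts Citrix most), it helps to batch a few hundred pings and compute the spread. A minimal Python sketch (assuming a Unix-style ping whose output contains "time=12.3 ms"; Windows ping prints "time=12ms" and would need an adjusted pattern; the first hop from the traceroute is a better target than Google for isolating the Comcast segment):

        import re
        import statistics
        import subprocess

        HOST = "73.98.44.1"  # first hop outside the LAN, per the traceroute
        COUNT = 100

        out = subprocess.run(["ping", "-c", str(COUNT), HOST],
                             capture_output=True, text=True).stdout
        times = [float(t) for t in re.findall(r"time=([\d.]+) ms", out)]
        if not times:
            raise SystemExit("no replies parsed; check the ping output format")

        print("replies:", len(times), "of", COUNT)
        print("min/avg/max: %.1f / %.1f / %.1f ms"
              % (min(times), statistics.mean(times), max(times)))
        print("jitter (stdev): %.1f ms" % statistics.pstdev(times))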

    Read the article

  • ORA-06502: PL/SQL: numeric or value error: character string buffer too small with Oracle aggregate functions

    - by Tunde
    Good day gurus, I have a script that populates tables on a regular basis, and it crashed and gave the above error. The strange thing is that it had been running for close to 3 months on the production system with no problems, and it suddenly crashed last week. There have not been any changes to the tables as far as I know. Has anyone encountered something like this before? I believe it has something to do with the aggregate functions I'm implementing in it, but it worked initially. Please find below the part of the script, developed into a procedure, that I reckon gives the error.
    CREATE OR REPLACE PROCEDURE V1 IS
    --DECLARE
      v_a VARCHAR2(4000);
      v_b VARCHAR2(4000);
      v_c VARCHAR2(4000);
      v_d VARCHAR2(4000);
      v_e VARCHAR2(4000);
      v_f VARCHAR2(4000);
      v_g VARCHAR2(4000);
      v_h VARCHAR2(4000);
      v_i VARCHAR2(4000);
      v_j VARCHAR2(4000);
      v_k VARCHAR2(4000);
      v_l VARCHAR2(4000);
      v_m VARCHAR2(4000);
      v_n NUMBER(10);
      v_o VARCHAR2(4000);
    --
    -- Procedure that populates DEMO table
    BEGIN
      -- Delete all from the DEMO table
      DELETE FROM DEMO;
      -- Populate fields in DEMO from DEMOV1
      INSERT INTO DEMO(ID, D_ID, CTR_ID, C_ID, DT_NAM, TP, BYR, ENY, ONG, SUMM,
                       DTW, REV, LD, MD, STAT, CRD)
      SELECT ID, D_ID, CTR_ID, C_ID, DT_NAM, TP,
             TO_NUMBER(TO_CHAR(BYR,'YYYY')),
             TO_NUMBER(TO_CHAR(NVL(ENY,SYSDATE),'YYYY')),
             CASE WHEN ENY IS NULL THEN 'Y' ELSE 'N' END,
             SUMMARY, DTW, REV, LD, MD, '1', SYSDATE
      FROM DEMOV1;
      -- Loop through DEMO table
      FOR j IN (SELECT ID, CTR_ID, C_ID FROM DEMO) LOOP
        SELECT semic_concat(TXTDESC) INTO v_a FROM GEOT WHERE ID = j.ID;
        SELECT COUNT(*) INTO v_n
        FROM MERP M, PROJ P
        WHERE M.MID = P.COD AND ID = j.ID AND PROAC IS NULL;
        IF (v_n > 0) THEN
          SELECT semic_concat(PRO) INTO v_b
          FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID;
        ELSE
          SELECT semic_concat(PRO || '(' || PROAC || ')') INTO v_b
          FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID;
        END IF;
        SELECT semic_concat(VOCNAME('P02',COD)) INTO v_c FROM PAR WHERE ID = j.ID;
        SELECT semic_concat(VOCNAME('L05',COD)) INTO v_d FROM INST WHERE ID = j.ID;
        SELECT semic_concat(NVL(AUTHOR,'Anon') || ' (' || TO_CHAR(PUB,'YYYY') || ') '
                            || TITLE || ', ' || EDT) INTO v_e
        FROM REFE WHERE ID = j.ID;
        SELECT semic_concat(NAM) INTO v_f
        FROM EDM E, EDO EO WHERE E.EDMID = EO.EDOID AND ID = j.ID;
        SELECT semic_concat(VOCNAME('L08', COD)) INTO v_g FROM AVA WHERE ID = j.ID;
        SELECT or_concat(NAM) INTO v_o
        FROM CON WHERE ID = j.ID AND NAM = 'Unknown';
        IF (v_o = 'Unknown') THEN
          SELECT or_concat(JOBTITLE || ' (' || EMAIL || ')') INTO v_h
          FROM CON WHERE ID = j.ID;
        ELSE
          SELECT or_concat(NAM || ' (' || EMAIL || ')') INTO v_h
          FROM CON WHERE ID = j.ID;
        END IF;
        SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
        IF (v_i = ',') THEN
          v_i := NULL;
        ELSE
          SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
        END IF;
        SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
        IF (v_j = ',') THEN
          v_j := NULL;
        ELSE
          SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
        END IF;
        SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
        IF (v_k = ',') THEN
          v_k := NULL;
        ELSE
          SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
        END IF;
        SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
        IF (v_l = ',') THEN
          v_l := NULL;
        ELSE
          SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
        END IF;
        SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
        IF (v_m = ',') THEN
          v_m := NULL;
        ELSE
          SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
        END IF;
        -- Update DEMO table
        UPDATE DEMO
        SET GEOC = v_a, PRO = v_b, PAR = v_c, INS = v_d, REFER = v_e,
            ORGR = v_f, AVAY = v_g, CON = v_h, DTH = v_i, INST = v_j,
            SA = v_k, CC = v_l, EDPR = v_m,
            CTR = (SELECT NAM FROM EDM WHERE EDMID = j.CTR_ID),
            COLL = (SELECT NAM FROM EDM WHERE EDMID = j.C_ID)
        WHERE ID = j.ID;
      END LOOP;
    END V1;
    /
    The aggregate functions are commaencap_concat (encapsulates with commas), or_concat (concatenates with an 'or'), and semic_concat (concatenates with a semicolon). The remaining tables used are all linked to the main table DEMO. I have checked the column sizes and there seems to be no problem. I tried executing the SELECT statements alone and they give the same error, without populating the tables. Any clues? Many thanks for your anticipated support.

    Read the article

  • Users being forced to re-login randomly, before session and auth ticket timeout values are reached

    - by Don
    I'm getting reports and complaints from my users that they will be using a screen and then get kicked back to the login screen on their very next request. It doesn't happen all the time, but randomly. After looking at the web server, the error that shows up in the application event log is:
    Event code: 4005
    Event message: Forms authentication failed for the request. Reason: The ticket supplied has expired.
    Everything that I read starts out with people asking about web gardens or load balancing. We are not using either of those. We're a single Windows 2003 (32-bit OS, 64-bit hardware) server with IIS6, and this is the only website on this server. This behavior does not generate any application exceptions or visible issues for the user; they just get booted back to the login screen and are forced to log in. As you can imagine, this is extremely annoying and counter-productive for our users. Here's what I have set in my web.config for the application in the root:
    <authentication mode="Forms">
      <forms name=".TcaNet"
             protection="All"
             timeout="40"
             loginUrl="~/Login.aspx"
             defaultUrl="~/MyHome.aspx"
             path="/"
             slidingExpiration="true"
             requireSSL="false" />
    </authentication>
    I have also read that you can have issues if you have some locations set up that no longer exist or are bogus. My path attributes are all valid directories, so that shouldn't be the problem:
    <location path="js">
      <system.web>
        <authorization>
          <allow users="*" />
        </authorization>
      </system.web>
    </location>
    (with identical <location> blocks for path="images", path="anon", path="App_Themes", and path="NonSSL")
    The only thing I'm not clear on is whether my timeout value in the forms element for the auth ticket has to be the same as my session timeout value (defined in the app's configuration in IIS). I've read some things that say you should have the authentication timeout shorter (40) than the session timeout (45) to avoid possible complications. Either way, we have users that get kicked to the login screen a minute or two after their last action, so the session definitely should not be expiring.
    Update 2/23/09: I've since set the session timeout and authentication ticket timeout values to both be 45, and the problem still seems to be happening. The only other web.config in the application is in one virtual directory that hosts Community Server. That web.config's authentication settings are as follows:
    <authentication mode="Forms">
      <forms name=".TcaNet"
             protection="All"
             timeout="40"
             loginUrl="~/Login.aspx"
             defaultUrl="~/MyHome.aspx"
             path="/"
             slidingExpiration="true"
             requireSSL="true" />
    </authentication>
    And while I don't believe it applies unless you're in a web garden, I have the machineKey values set to be the same in both web.config files (keys removed for convenience):
    <machineKey validationKey="<MYVALIDATIONKEYHERE>"
                decryptionKey="<MYDECRYPTIONKEYHERE>"
                validation="SHA1" />
    Any help with this would be greatly appreciated. This seems to be one of those problems that yields a ton of Google results, none of which seem to fit my situation so far.

    Read the article

  • Configuring Novell iPrint client on Ubuntu 13.10

    - by Mahdi Sadeghi
    Recently I have struggled a lot to get the Novell iPrint client working on my laptop. I need it to use the Follow Me printers at our university (you can collect your print job from any printer). Using this tutorial from Novell, I tried to convert the RPM package and install it on Ubuntu 13.04 and 13.10. The post-install script of the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray, and the old whitelisting workarounds for the Novell client no longer work). I have the iPrint plugin installed in Firefox as well (I copied the shared object files to the plugin directories).
    I try installing printers from the provided IPP URL (which lists the available printers on the server) with no success; after clicking the printer name I get various errors. Formerly, Firefox used to ask for my network username/password when installing the SSL printer, but now it returns:
    iPrint Printer - The printer is currently not available.
    However, I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null, and even if I change it to the exact address that I see on working machines, it prints nothing.
    I have tried Novell's command-line tool, iprntcmd, installed at /opt/novell/iprint/bin/:
    msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
    iprntcmd v05.04.00
    Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
    Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.
    It adds the printer with an empty location, and again nothing prints.
    What I found interesting is the log file at ~/.iprint/errors.txt, with strange errors that I hope somebody here can understand. (Note that HP is my local printer and has nothing to do with iPrint.) When I try to install the SSL printer, the following pair of entries repeats at 11:02:03, 11:02:05, 11:02:06, and 11:02:08:
    Thu Oct 31 11:02:03 2013
      Trace Info: iprint.c, line 6690
      Group Info: IPRINT-lib
      Error Code: 4096 (0x1000)
      User ID: 1000
      Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
      Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
    Thu Oct 31 11:02:03 2013
      Trace Info: iprint.c, line 6800
      Group Info: IPRINT-lib
      Error Code: 4096 (0x1000)
      User ID: 1000
      Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
      Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
    And in between, at 11:02:06, these two appear once:
    Thu Oct 31 11:02:06 2013
      Trace Info: mydoreq.c, line 676
      Group Info: CLIB
      Error Code: 0 (0x0)
      User ID: 1000
      Error Msg: Success
      Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)
    Thu Oct 31 11:02:06 2013
      Trace Info: mydoreq.c, line 1293
      Group Info: CUPS-IPP
      Error Code: 1282 (0x502)
      User ID: 1000
      Error Msg: iPrint Printer - The printer is currently not available.
      Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE
    I should say that a friend of mine can print easily using the same instructions on CrunchBang, and another guy on 12.04 LTS, though with more struggle. It also worked for me on Linux Mint Maya with my old laptop. Is there anybody out there who can help me solve these problems? I am really disappointed with Novell and our university support.
    PS. I had the same problem with 13.04. It makes no difference whether I am inside the network or connect via VPN; I have the same issues.

    Read the article

  • Developer’s Life – Summary of Superhero Articles

    - by Pinal Dave
    Earlier this year, I wrote an article series where I talked about the developer's life and compared it with superheroes. I have received an amazing response to this series, and I have been receiving quite a lot of emails suggesting that I should write more blog posts about them. Currently I am not planning to write more, but I will soon continue with another series. In this blog post, I have summarized the entire series. Let me know if you want me to write about any superhero, and I will see what I can do about that hero.
    Developer’s Life – Every Developer is a Captain America
    Captain America was first created as a comic book character in the 1940’s as a way to boost morale during World War II. Aimed at a children’s audience, his legacy faded away when the war ended. However, he has recently had a major reboot to become a popular movie character that deals with modern issues.
    Developer’s Life – Every Developer is the Incredible Hulk
    The Incredible Hulk is possibly one of the scariest superheroes out there. All superheroes are meant to be "out of this world" and awe-inspiring, but I think most people will agree when I say The Hulk takes this to the next level. He is the result of an industrial accident, which is scary enough in its own right. Plus, when mild-mannered Bruce Banner is angered, he goes completely out of control and transforms into a destructive monster that he cannot control and has no memories of.
    Developer’s Life – Every Developer is a Wonder Woman
    We have focused a lot lately on this "superhero series." I love fantasy books and movies, and I feel like there is a lot to be learned from them. As I am writing this series, though, I have noticed that every superhero I write about is a man. So today, I would like to talk about the major female superhero: Wonder Woman.
    Developer’s Life – Every Developer is a Harry Potter
    Harry Potter might not be a superhero in the traditional sense, but I believe he still has a lot to teach us and show us about life as a developer. If you have been living under a rock for the last 17 years, you might not know that Harry Potter is the main character in an extremely popular series of books and movies documenting the education and tribulations of a young wizard (and his friends).
    Developer’s Life – Every Developer is Like Transformers
    Transformers may not be superheroes; they don't wear capes, they don't have amazing powers outside of their size and folding ability, and they're not even human (technically). Part of their enduring popularity is that while we are enjoying over-the-top movies, we are learning about good leadership and strong personal skills.
    Developer’s Life – Every Developer is an Iron Man
    Iron Man is another superhero who is not naturally "super," but relies on his brain (and money) to turn himself into a fighting machine. While traditional superheroes are still popular, a three-movie franchise and incorporation into the new Avengers series show that Iron Man is popular enough on his own.
    Developer’s Life – Every Developer is a Sherlock Holmes
    I have been thinking a lot about how developers are like superheroes, and I have written two blog posts now comparing them to Spiderman and Superman. I have a lot of love and respect for developers, and I hope that they are enjoying these articles and that others are learning a little bit about the profession. There is another fictional character who, while not technically a superhero, is very powerful, and who I also think stands as a good example of a developer. That character is Sherlock Holmes. Sherlock Holmes is a British detective, first made popular at the turn of the 19th century by author Sir Arthur Conan Doyle. The original Sherlock Holmes was a brilliant detective who could solve the most mind-boggling crimes through simple observation and deduction.
    Developer’s Life – Every Developer is a Chhota Bheem
    Chhota Bheem is a cartoon character that is extremely popular where I live. He is my daughter's favorite character. I like to say that children love Chhota Bheem more than their parents; it is lucky for us he is not real! Children love Chhota Bheem because he is the absolute "good guy." He is smart, loyal, and strong. He and his friends live in Dholakpur and fight off their many enemies, and always win, in every episode. In each episode, they learn something about friendship, bravery, and being kind to others. Chhota Bheem is a good role model for children, and I think that he is a good role model for developers as well.
    Developer’s Life – Every Developer is a Batman
    Batman is one of the darkest superheroes in the fantasy canon. He does not come to his powers through any sort of magical coincidence or radioactive insect, but through a lot of psychological scarring caused by witnessing the death of his parents. Despite his dark back story, he possesses a lot of admirable abilities that I feel bear comparison to developers.
    Developer’s Life – Every Developer is a Superman
    I enjoyed comparing developers to Spiderman so much that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero: Superman. Superman is probably the most famous superhero, and one of the most inspiring.
    Developer’s Life – Every Developer is a Spiderman
    I have to admit, Spiderman is my favorite superhero. The most recent movie was recently released in theaters, so it has been at the front of my mind for some time. Spiderman was my favorite superhero even before the latest movie came out, but of course I took my whole family to see the movie as soon as I could! Every one of us loved it, including my daughter. We all left the movie thinking how great it would be to be Spiderman. So, with that in mind, I started thinking about how we are like Spiderman in our everyday lives, especially developers.
    I would like to know which superhero is your favorite hero!
    Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Developer, Superhero

    Read the article

  • Tykie

    - by Brian
    Here’s the obituary my mother wrote for Tykie. I still miss the little guy quite a bit. Anyone who’s interested in further information on hearing dogs should check out the IHDI website. I cannot begin to express how helpful a hearing dog can be for the hearing impaired. If you feel so inclined, please make a donation.
    In Memoriam, Tykie 1993-2010
    The American Legion Post 401, South Wichita, KS, supported one of its members and its commander by sponsoring a service dog for him. Unlike most service dogs, this one was for the hearing impaired. Both Ocie and Betty Sims had hearing loss, Ocie more than Betty. The Post and Auxiliary had garage sales, auctions, and other fund-raising endeavors to gather donations for the dog. Betty made Teddy bears with growlers that were auctioned for donations to bring a hearing dog from International Hearing Dog, Henderson, Colorado. Tykie, a small, wiry, salt-and-pepper terrier, arrived September 1, 1994, to begin his work, which included attending Post 401 meetings and celebrations as well as raising more money to be donated to IHD to help others have hearing dogs. Tykie was a young dog, less than a year old, when he came to Wichita. He was always anxious to please and seldom barked, though he did put out a kind of cry when he was giving his urgent announcement that someone was at the door or the telephone was ringing. He also enjoyed chasing squirrels in the backyard garden that Ocie prized.
    In 1995, Betty almost died of a lung infection. Tykie was at the hospital with Ocie when he could visit. Several weeks after she was able to come home, following a miraculous recovery, Tykie and Ocie went to a car show in downtown Wichita. Ocie's retina tore loose in the only eye he could see out of, and, almost blind, he was in great pain. How Ocie and Tykie got home is still a mystery, but the family legend goes that Tykie added seeing-eye dog to his repertoire and helped drive him home.
    Health problems continued for Ocie, and when he was placed in a nursing home, Tykie was moved to be Betty's hearing dog. No problem for Tykie; he still saw his friends at the Post and continued to help with visitors at the door. The night of May 3, 1999, Betty and Tykie were in the bedroom watching TV when Tykie began hitting her with both front paws, as he would if something were urgent. She said later she thought he wanted to go out. As she and the dog walked down the hall towards the back of the house, Tykie hit her again with his front paws with such urgency that she fell into a small coat closet. That small 2-by-2 closet became their refuge, as that very second the roof of her house came off while an F4 tornado raced through the city. Betty acquired one small wound on her hand from a piece of flying glass as she pulled Tykie into the closet with her.
    Tykie was a hero that day and for a lot of days after. He kept Betty going as she rebuilt her home and after her husband died on April 15, 2000. Tykie had to be cared for, so she had to take him outside and bring him inside. He attended the weddings of grandchildren and the funerals of Post friends. When Betty died on February 17, 2002, Tykie's life changed again. IHD gave approval for his transfer and retirement to Betty and Ocie's grandson, Brian Laird, who has a hearing loss similar to his grandfather's. A few days after the funeral, Tykie flew to his new home in Rutherford, NJ, where he was able to take long walks for a couple of years before moving back to the Kansas City area. He was still full of adventure.
    He was written up in a book about service dogs, where his story of the tornado and his picture appeared. He spent weekends at Brian's mother's farm, getting muddy and being afraid of cats and chickens. He also took on an odyssey, as he slipped from his fenced yard in Lenexa one day and walked more than seven miles in Overland Park traffic before being found by a good Samaritan who called IHD to find out where he belonged.
    Tykie was deaf for about the last two years of his long life and became blind as well, but he continued to strive to please. Tykie was 16 years and 4 months old when he was cremated. His ashes were scattered on the graves of Betty and Ocie Sims at Greenwood Cemetery, west of Wichita, on the afternoon of March 21, 2010, with about a dozen family and Post 401 members present.
    It is still the rule: service dogs are the only dogs allowed inside the Post home.
    Submitted by Linda Laird, daughter of Betty and Ocie and mother of Brian Laird.

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from programmers experienced in scientific data analysis. Do you think this advice is helpful and practical, or is it good only in an ideal world?
    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745
    Box 1. Summary of Best Practices
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.
    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs, both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I decided I couldn't put up with that mess any longer; some action had to be taken.
    Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admit, because all of these practices take some amount of time and effort to adopt, it may in general be hard to persuade reluctant collaborators to comply with them.
    I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?
    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b. I'm about to have a go at: 5b,c. Not yet: 2b,c, 4a, 7c, 8a,b,c. (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it would help my work with MATLAB?)
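
    Since practices 5(b) and 5(c) are next on the list above: a minimal, hypothetical sketch of "turn bugs into test cases" using Python's built-in unittest module (the function and its past bug are invented for illustration, not taken from the paper):

        import unittest

        def moving_average(xs, window):
            """Mean over a sliding window; assertions guard bad input (practice 5a)."""
            assert window > 0, "window must be positive"
            assert len(xs) >= window, "need at least one full window"
            return [sum(xs[i:i + window]) / window
                    for i in range(len(xs) - window + 1)]

        class TestMovingAverage(unittest.TestCase):
            def test_regression_short_input(self):
                # A past bug crashed on input exactly one window long; pinning it
                # down as a test keeps it fixed forever (practice 5c).
                self.assertEqual(moving_average([1.0, 3.0], 2), [2.0])

            def test_rejects_bad_window(self):
                with self.assertRaises(AssertionError):
                    moving_average([1.0, 2.0, 3.0], 0)

        if __name__ == "__main__":
            unittest.main()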

    Read the article

  • Top 4 Lame Tech Blogging Posts

    - by jkauffman
    From a consumption point of view, tech blogging is a great resource for one-off articles on niche subjects. If you spend any time reading tech blogs, you may find yourself running into several common, useless types of posts that tech bloggers slip into. Some of these lame posts may just be natural, due to common nerd psychology, and some others are probably due to lame, lemming-like laziness. I'm sure I'll do my fair share of fitting the mold, but I quickly get bored when I happen upon posts that hit these patterns without any real purpose or personal touches.
    1. The Content Regurgitation Posts
    This is a common pattern fueled by the starving pan-handlers in the web-traffic economy. These are posts that are terse opinions on, or addendums to, an existing post. They commonly involve huge block quotes from the linked article, which almost always make up over 50% of the post itself. I've accidentally gone to these posts when I'm knowingly only interested in the source material. Web links can degrade as well, so if the source link is broken, then, well, I'm pretty steamed. I see this occur with simple opinions on technologies, Stack Overflow solutions, or various tech news like posts from Microsoft. It's not uncommon to go to the linked article and see the author announce that he "added a blog post" as a response or summary of the topic. This is just rude, but those who do it are probably aware of that. It's a matter of winning that sweet, juicy web traffic. I doubt this leeching is fooling anybody these days. I would like to rally human dignity and urge people to avoid these types of posts and just leave a comment on the source material.
    2. The "Sorry I Haven't Posted In A While" Posts
    This one is far too common. You'll most likely see this quote somewhere in the body of the offending post: "I have been really busy." If the poster is especially guilt-ridden, you'll see a few volleys of excuses. Here are some common reasons I've seen, which I'll list from least to most painfully awkward:
    - Out of town
    - Vague allusions to personal health problems (these typically include phrases like "sick", "treatment", and "all better now!")
    - "Personal issues" (which I usually read as "divorce")
    - Graphic or specific personal health problems (maximum awkwardness potential is achieved if you see links to charity fund websites)
    I can't help but try to over-analyze why this occurs. Personally, I see it as an amalgamation of three plain factors:
    - Life happens
    - Us nerds are duty-driven, and driven to guilt at personal inefficiencies
    - Tech blogs can become personal journals
    I don't think we can do much about the first two, but on the third I think we could certainly contain our urges. I'm a pretty boring guy and, whether I like it or not, I have an unspoken duty to protect the world from hearing about my unremarkable existence. Nobody cares what kind of sandwich I'm eating. Similarly, if I disappear for a while, it's unlikely that anybody who happens upon my blog will care why. Rest assured, if I stop posting for a while due to a vasectomy, you will be the first to know.
    3. The "At A Conference", or "Conference Review", Posts
    I don't know if I'm like everyone else on this one, but I have never been successfully interested in these posts. It even sounds like a good idea: if I can't make it to a particular conference (like the KCDC this year), wouldn't I be interested in a concentrated summary of events? Apparently, no! Within this realm, I've never read a post by a blogger that held my interest. What really baffles me is that, for whatever reason, I am genuinely engaged and interested when talking to someone in person about the same topic. I have noticed the same phenomenon when hearing about others' vacations. If someone sends me an email about their vacation, I gloss over it and forget about it quickly. In contrast, if I'm speaking to that individual in person about their vacation, I'm actually interested. I'm unsure why the written medium eradicates the intrigue. I was raised by a roaming pack of friendly wild video games, so that may be a factor.
    4. The "Top X Number of Y's That Z" Posts
    I've seen this one crop up a lot more in the past few years. Here are some fabricated examples:
    - 5 Easy Ways to Improve Your Code
    - Top 7 Good Habits Programmers Learn From Experience
    - The 8 Things to Consider When Giving Estimates
    - Top 4 Lame Tech Blogging Posts
    These are attention-grabbing headlines, and I'd assume they rack up hits. In fact, I enjoy a good number of these. But I've also been drawn to articles like this, only to find an endless list of identically formatted posts in the blog's archive sidebar, often with overlapping topics. These types of posts give the impression that the author has prioritized and organized the points as the result of a comprehensive consideration of a particular topic. Did the author really weigh all the possibilities when identifying the "Top 4 Lame Tech Blogging Patterns"? Unfortunately, probably not. What a tool. To reiterate, I still enjoy the format, but I feel it is abused. Nowadays, I'm pretty skeptical when approaching posts in this format. If these trends continue, my brain will filter these blog posts out just as effectively as it ignores the encroaching "do xxx with this one trick" advertisements.
    Conclusion
    To active blog readers, I hope my guide has saved you precious time by helping you identify lame blog posts at a glance. Save time and energy by skipping over the chaff of the internet! And if you author a blog, perhaps my insight will help you avoid the occasional urge to produce these needless filler posts.

    Read the article

  • SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log

    - by pinaldave
    One of the biggest reasons I go to SQLPASS is that my friends are going there too. There are so many friends with whom I often talk on Facebook and Twitter, but I rarely get time to meet them and talk with them in person. One thing I am usually sure of is that many of them will attend SQLPASS. This is one event which every SQL Server enthusiast should attend. Just like everybody else, I had a pleasant time meeting many of my SQL friends. There were many friends I met without taking a photo, and many friends who took photos on their cameras that I do not have. Here is 1% of the photos which I do have. If you are not in a photo, it does not mean I have less respect for our friendship. Please post a link to our photo together :)
    I was very fortunate to be able to snap a quick photograph with Dr. David DeWitt. I stood outside the hall waiting for him to show up, and when he was heading down from the convention center I asked very politely if I could have one photo for my memory lane; he agreed. It indeed made my day!
    Pinal Dave with Dr. David DeWitt
    Every single time I meet Steve, I make sure I have one photo for my memory. Steve is so kind every single time. If you know SQL and do not know Steve Jones, you do not know SQL (IMHO). The following is a photograph with Michael McLean; more details about this photo in a future blog post!
    Pinal Dave, Michael McLean, and Rick Morelan
    Arnie always shares his wisdom with me. I still remember when I visited the USA for the very first time: I was standing alone in a corner, and Arnie walked up to me and introduced me to every single person he knew. Talking to Arnie is always a pleasure and always inspiring.
    Arnie Rowland and Pinal Dave
    I am now a published author and have written two books so far. I am fortunate to have Rick Morelan as co-author of both of my books. He is a great guy and very easy to become friends with. I am very much impressed by him and his kindness during the co-authoring of the books. Here is the very first of our photographs together at SQLPASS.
    Rick Morelan and Pinal Dave
    Diego Nogare and I have been talking for a long time on Twitter and on various social media channels. I finally got the chance to meet my friend from Brazil. It was an excellent experience to meet a friend whom one has wanted to meet for a long time and never had the chance to meet earlier.
    Buck Woody: who does not know Buck? He is funny, kind, and, most important, a friend to everyone. Buck is so kind that he does not hesitate to approach people, even though he is famous and very well known in the community. Every time I meet him I learn something. He is always smiling and approachable.
    Pinal Dave and Buck Woody
    Rushabh Mehta is the current SQLPASS president and a personal friend. He always has a smiling face and tremendous love for the SQL community. I often wonder where he gets the time for all the effort he puts in for the community. I never miss a chance to meet and greet him. Even though he is a renowned SQL guru and an extremely busy person, every single time I meet him he asks me, "How are Nupur and Shaivi?" He even remembers my wife's and daughter's names. I am touched.
    Rushabh Mehta and Pinal Dave
    Nigel Sammy has an excellent sense of humor and passion for the community. We have excellent synergy when we are together. The attached photo was taken while I was talking to him about SQL on the Seattle shoreline.
    Pinal Dave and Nigel Sammy
    Rick Morelan wanted this trip of mine to be memorable. I am vegetarian, and I told him that I do not like seafood. Well, to prove the point, he took me to a fantastic seafood restaurant in Seattle and treated me to mouth-watering vegetarian dishes. I think when I go to Seattle next time, I am going to make him take me to the same place again.
    Rick, Rushabh, Pinal and Paras
    Well, this is a short summary of a few of the friends I met in Seattle. What is life without friends, eh?
    Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Workaround for datadude deployment bug - NullReferenceException

    - by jamiet
    I have come across a bug in Visual Studio 2010 Database Projects (aka datadude aka DPro aka Visual Studio Database Development Tools aka Visual Studio Team Edition for Database Professionals aka Juneau aka SQL Server Data Tools) that other people may encounter so, for the purposes of googling, I'm writing this blog post about it. Through my own googling I discovered that a Connect bug had already been raised about it (VS2010 Database project deploy - "SqlDeployTask" task failed unexpectedly, NullReferenceException), and coincidentally enough it was raised by my former colleague Tom Hunter (whom I have mentioned here before as the superhuman Tom Hunter), although it has not (at this time) received a reply from Microsoft. Tom provided a repro, namely that this syntactically valid function definition:

    CREATE FUNCTION [dbo].[Function1]()
    RETURNS TABLE
    AS
    RETURN
    (
       WITH cte AS (
          SELECT 1 AS [c1]
          FROM [$(Database3)].[dbo].[Table1]
       )
       SELECT 1 AS [c1]
       FROM cte
    )

    would produce this nasty unhelpful error upon deployment:

    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.
    System.NullReferenceException: Object reference not set to an instance of an object.
       at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)
       at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)
       at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()
       at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()
       at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)
       at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()
       at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
       at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
       at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)
    Done executing task "SqlDeployTask" -- FAILED.
    Done building target "DspDeploy" in project "Lloyds.UKTax.DB.UKtax.dbproj" -- FAILED.
    Done executing task "CallTarget" -- FAILED.
    Done building target "DBDeploy" in project

    It turns out there is a certain set of circumstances that needs to be met for this error to occur:
    - The object being deployed is an inline function (the bug may also exist for multistatement and scalar functions - I haven't tested that)
    - That object includes SQLCMD variable references
    - The object has already been deployed successfully
    Just to reiterate that last bullet point: the error does not occur when you deploy the function for the first time, only on subsequent deployments.

    Luckily I have a direct line to a guy on the development team, so I fired off an email on Friday evening and today (Monday) I received a reply telling me that there is a simple fix: one simply has to remove the parentheses that wrap the SQL statement. So, in the case of Tom's repro, the function definition simply has to be changed to:

    CREATE FUNCTION [dbo].[Function1]()
    RETURNS TABLE
    AS
    RETURN
    --(
       WITH cte AS (
          SELECT 1 AS [c1]
          FROM [$(Database3)].[dbo].[Table1]
       )
       SELECT 1 AS [c1]
       FROM cte
    --)

    I have commented out the offending parentheses rather than removing them just to emphasize the point. Thereafter the function will deploy fine.
I tested this out on my own project this morning and can confirm that this fix does indeed work.   I have been told that the bug CAN be reproduced in the Release Candidate (RC) 0 build of SQL Server Data Tools in SQL Server 2010 so am hoping that a fix makes it in for the Release-To-Manufacturing (RTM) build. Hope this helps @jamiet

    Read the article

  • On Life and TechEd…

    - by MOSSLover
    I haven't been writing here, I know. I am very sorry, but I have just been too busy trying to make my personal life just as good as my SharePoint life. So I was at TechEd again helping with the hands-on labs, but I had a very long couple of weeks, and I will have a long week again.

    I started out going to my friend Randy Walker's wedding and ended up at my grandmother's condo in Fort Lauderdale. It was a very trying week. I had to drive 5 1/2 hours to the wedding from St. Louis and back. Randy is an awesome person. He is a great guy and he was the first person ever to nominate me for MVP. I knew it probably wasn't going anywhere, but it was the thought that counted. I met Randy in 2008 at Tulsa TechFest. We had fun jamming out to Rock Band. I knew he was good people back then. He has let me vent and I have let him vent over countless personal problems. He has always been a great friend. So it was a no-brainer to decide to go to his wedding; no matter how much driving or stress or lack of sleep, it was worth it. I am incredibly happy for him, finally finding a diamond amongst a lot of coal. To take part in his celebration was so awesome, and I thank him again for letting me participate in the ceremony.

    After Randy's wedding I drove 5 1/2 hours, landed in St. Louis, fed a cat an asthma pill hidden in wet cat food, and slept for 4 hours. I immediately saw my best friend, who dropped me off at the airport, and proceeded to TechEd. I slept 1 hour on each flight, then ended up working a 3-hour shift at TechEd. The rest of the week was a haze of connecting with people and sleeping very little. I got to see my friend Tasha and her husband Casey, plus a billion other people. It was a great week, and then I got a call from my grandmother. It turned out her husband had been admitted to the hospital.

    My grandparents on my dad's side have been divorced since the 60s, which means I never got to see them together. I always felt like they never clicked. When I was a kid we would spend half our time in Chicago at my grandma's house and half at my grandpa's. We would hang out with my grandpa's wife Bobbi and my grandma's husband Leo. My cousins always called Leo Pappa, while my brother and I used Leo. My cousins lived in Chicago up until my cousin Gavi was born; then they moved to Philadelphia. I remember complaining to my dad that we never visited anywhere cool, just Chicago and Kansas City. I also remember Leo teaching me and my brother, Sam, how to climb a tree and play tennis against the back wall of the apartment. My grandfather's house was kind of stuffy and boring, but we always enjoyed my grandmother's.

    I'm not sure a lot of people knew this, but when I was 16 years old I saw my grandfather on my mother's side slowly die of congestive heart failure in a nursing home. When I was 18 I saw my grandmother on my mother's side slowly die of an infection over a 30-day period. I hate hospitals and I hate nursing homes, but sometimes you have to suck in your gut and be strong. I tried to do that this weekend, and I hope I did an OK job. I really hope things get better, but if they don't, I really appreciate everything he has given me in life. I am a terrible tennis player and I haven't climbed a tree in ages, but they both encouraged me when encouragement from my parents was lacking. It was Father's Day today and I have to at least pay tribute to Leo Morris, because he has been a good person to me all my life. Technorati Tags: TechEd,Grandparents,Father's Day

    Read the article

  • The Winds of Change are a Blowin’

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now. Not the fan part, but the paying customer part. I'm still a huge fan. I think that SourceGear does a great job with their product, and support has been fantastic when needed (which is not very often). I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through his blogging and books. I still think their products are high quality and do a fantastic job of what they do. But there's the rub…what they do is no longer enough for me.

    As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team. I need better project management tools, and this is where Vault Professional lags behind several other tools. Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more. (Note, I did contact SourceGear about my quest for more, but apparently the rest of their customer base has not been clamoring for this and so they have not built it. Granted, I wasn't clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don't want to wait for them to build it into their system.)

    Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools. They built a limited integration with Axosoft OnTime, which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum). I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video). I fell in love with the capabilities of OnTime. Unfortunately, the integration with Vault for source control management was, as I mentioned, limited. I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up. So then I did what was previously unthinkable for me: I considered switching not just the work tracking tool, but also the source code management tool. This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with. When you find a good weapon to protect your gold, stick with it.

    I looked at Git and Tortoise SVN, but the integration methods for those were pretty rough compared to what I was used to. The recommended tool from Axosoft's point of view appeared to be RocketSVN, but I really wasn't sure I wanted to go the "flavor of Subversion" route. Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn't afford: Team Foundation Server. And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now. So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again.

    I really went deep on checking out the tools. I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft's campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny). Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction. As an interesting twist of fate, one of my employees crossed paths with an ALM consultant from Northwest Cadence, a local Microsoft Partner and one of the companies that produced several of the webcasts that I had been watching. So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn't find a job so he called himself a consultant. It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: branching strategies and automated builds (that's probably a whole separate blog entry). As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company. This is a service that can be paid for with Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again coincidentally, we had several that were just about to expire, so I put them to good use.

    The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with. We have just purchased a new server for our TFS rollout and are planning the steps and options right now. This is still a big project ahead of us, to not only install and configure TFS but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time. We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear's products and Axosoft's OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true: copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this; let's see if we can't identify all of them.

    Write your query to only include the data you want
    Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let's look at a few more practical ways to do this.

    Hide the unwanted columns
    Right-click on a column header. In the context menu, select 'Columns.' Hide the columns you don't want. Copy and paste. WYSIWYG Grids, Hide Columns and Filter Rows

    Mouse-select the columns
    Obvious, but a bit painful. For a very large dataset, you'll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data.

    Use the Export Wizard
    This used to be called 'Unload' – agreed, not a great name. So, we changed it. In a grid, right-click on the data, and on the context menu, select 'Export…' Select your format – I suggest 'delimited' or 'fixed' for copying data to the clipboard. You can export to the clipboard, yes you can! Click 'Next.' Click in the Columns dialog, and choose the columns you want copied. Trim the columns you don't want copied. Click 'Finish.' Alt- or Ctrl-Tab to your window or application of choice. And paste!

    "FIRST_NAME" "LAST_NAME"
    "Donald" "OConnell"
    "Douglas" "Grant"
    "Jennifer" "Whalen"
    "Pat" "Fay"
    "Susan" "Mavris"
    "William" "Gietz"
    "Alexander" "Hunold"
    "Bruce" "Ernst"
    "David" "Austin"
    "Valli" "Pataballa"
    "Diana" "Lorentz"
    "Daniel" "Faviet"
    "John" "Chen"
    "Ismael" "Sciarra"
    "Jose Manuel" "Urman"
    "Luis" "Popp"
    "Alexander" "Khoo"
    "Shelli" "Baida"
    "Sigal" "Tobias"
    "Guy" "Himuro"
    "Karen" "Colmenares"
    "Matthew" "Weiss"
    "Adam" "Fripp"
    "Payam" "Kaufling"
    "Shanta" "Vollman"
    "Kevin" "Mourgos"
    "Julia" "Nayer"
    "Irene" "Mikkilineni"
    ...

    There are probably at least 2 or 3 more ways, but… try these and let me know how we can improve things. I've already gotten a request to be able to include the SQL text used to populate the dataset in the copy to clipboard, and it's now on our to-do list.

    Read the article

  • On checking whether a port is open on the firewall

    - by [email protected]
    Hi, well, sometimes DBAs and sysadmins need to check whether a particular port is "open" on the corporate firewall -- i.e., *Grid Control*: will the communication between the OMS and a management agent work? One solution would consist of deploying the piece of software in question, starting it, and just checking that everything works fine; however, I find it classier to try to get that information beforehand. There are several tools for doing so -- i.e., nmap, *like Trinity in The Matrix* -- but I just found a nice piece of code for establishing a socket listening on a port passed as a parameter. After running the program, doing a telnet from the client machine will be a walk in the park.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(int argc, char *argv[])
    {
        int sockfd, newsockfd, portno;
        socklen_t clilen;
        struct sockaddr_in serv_addr, cli_addr;

        if (argc < 2) {
            fprintf(stderr, "ERROR: A port must be provided. Aborting ...\n");
            return 1;
        }

        /* Create a TCP socket */
        sockfd = socket(AF_INET, SOCK_STREAM, 0);
        if (sockfd < 0) {
            fprintf(stderr, "ERROR: Unable to open socket. Aborting ...\n");
            return 1;
        }

        /* Bind it to the port passed as a parameter, on all interfaces */
        portno = atoi(argv[1]);
        memset(&serv_addr, 0, sizeof(serv_addr));
        serv_addr.sin_family = AF_INET;
        serv_addr.sin_addr.s_addr = INADDR_ANY;
        serv_addr.sin_port = htons(portno);
        if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
            fprintf(stderr, "ERROR: Unable to bind socket. Aborting ...\n");
            return 1;
        }

        /* Listen and block until a single client connects, then exit */
        listen(sockfd, 5);
        clilen = sizeof(cli_addr);
        newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
        if (newsockfd < 0) {
            fprintf(stderr, "ERROR: Unable to accept connection. Aborting ...\n");
            return 1;
        }

        close(newsockfd);
        close(sockfd);
        return 0;
    }

    Of course, you can still ask the network guy whether the port is open or not. Hope it helps, L
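    To try it out, a quick sketch (the file name, host name and port below are only examples): compile and start the listener on the machine behind the firewall, then probe the port from the client machine.

    cc -o portcheck portcheck.c
    ./portcheck 1521          # on the server: listen on the port under test
    telnet dbserver 1521      # on the client: connects only if the firewall lets the port through

    If the connection is accepted, the program simply exits; if the firewall drops the traffic, the telnet attempt will hang or be refused.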

    Read the article

  • My History with Agile

    - by Robert May
    I'm going to write my history with Agile here. That way, in future posts, I can refer back to it instead of typing it out in the post that contains information you may actually want to read. Note that I'm actually a pretty senior developer, and do lots of technical interviews. I'm an Agile fan because of the difference it makes in people's lives and the improvement in quality it brings, and I'll sacrifice my own technological advancement to help teams.

    Management History
    I started management pretty early in my career, starting with the first job that I ever had. I actually do NOT have a CS or similar degree. I have a Bachelor's of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule-oriented. I didn't understand the true value of teams, and I'm ashamed to admit, I actually installed a fingerprint scanner as a time clock in this job. I shudder to think of the impact that I had on the team spirit. I didn't even trust them enough to fill out their time cards correctly. How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, I didn't understand the concept of a team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn't a good thing. I was told that I wasn't the type that would be a good manager by people whom I respected a lot. They said it because I was a computer geek, since they didn't understand good management either, but in retrospect, they were right about me then. I was too green. After my first job, I went on to other jobs and, with the exception of one job, I've managed people at them all. The rest of the management story is important for understanding Agile, so I'll save it for my next post.

    Technical History
    I've been in software development for many, many years. I technically started programming on a Commodore 64 in BASIC. I didn't know that I was programming, but I was sure having fun. That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect macro programming and other things that taught me the basics. My first "real" job was with a telephone company, and that's where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like Turbo Pascal, VB3-VB5, and semi-real tools like RPG and VisualRPG. I wrote my first web page in 1994, and built my first data-driven web page in 1995 using perlDB. You really can do anything with Perl. At this time, I also started a Linux-based internet service provider that is still in operation today. One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well. Smart guy. I also built my first ASP applications connecting to SQL Server 6.5, set up Exchange 5.5 for the company, and did lots of other system administration work. I'm a programmer by choice, mostly because I don't really like PC support.

    From there, I went on to a large state agency. I got to see and maintain true waterfall projects. Five years of maintaining the 200 VB COM+ (MTS, actually) DLLs that were used to calculate a single number is a long time. That was all Microsoft DNA technologies. SQL Server and VB6 were the tools of choice, although .NET started to be a factor near the end of my employment. I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that was a shim until MSXML 3.0 came out. Prior to 3.0, XSDs weren't supported, and I didn't want to write DTDs.

    Ironically, jobs after this were more generic. I pretty much settled in on the .NET Framework and revisions of it. Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don't think that I was as passionate about development and technologies. I was more into the management of development. I like people.

    Technorati Tags: Agile,history

    Read the article

  • My Tech Ed North America Preview - Content Edition

    - by Chris Gardner
    As I promised in my last post, I feel the need to give you a rundown of all the technical content I am looking forward to checking out at Tech Ed this year. We shall start with the content I know I'll be able to see. This would be some demo stations in the Technical Learning Center. I will DEFINITELY be checking out the Windows Phone Device Bar. I will admit that I am a bit of a phone snob, and I just want to manhandle all that sexy, sexy tech. I am also planning on talking to the Windows Phone team and the Azure team. Year after year, I end up spending more time in either the TLC or taking certification tests than anywhere else. This leads me to the one "Exam Cram" session I hope to attend. There is a session to cram for 70-599: Designing and Developing Windows Phone Applications. I know this seems odd. I'm (sort of) an XNA guru. However, I'm not that up on my Silverlight. I know enough to add Silverlight to an XNA project.

    Now, let's talk breakout sessions. We always need to keep track of where we're going. I know, I talk about solving problems over forcing buzzwords. However, it is important to know what those buzzwords are before you tell people not to use them. For this, we will look to the "What's New in Visual Studio 11" and "What's New in Microsoft .NET Framework 4.5" sessions. Of course, we do talk bad about buzzwords around here. For this, I'm really looking forward to "Visual C#/Visual Basic: Becoming a Guru with Existing Features." I still have .NET 2 tricks that are crucial to my internal libraries. In-depth knowledge will NEVER trump shortcut libraries.

    There is a session on ASP.NET for phones and tablets. For those of you that have not tried to use ASP.NET on a mobile device, there is one thing you really need to understand. Mobile devices don't use scroll bars. That's right; the thing with the least screen real estate doesn't use scroll bars. Thus, I am hoping this session will give some good advice on having an ASP.NET site target both mobile and desktop.

    The last "business only" session is "The Accidental Team Foundation Server Admin." T & W Operations is a VERY small business. As such, I am the TFS admin because I'm the developer who is also the SQL guru. I keep my server up, but it'd be nice to know some really cool tricks for the part-time guy.

    This leads us to the fun sessions. Coding4Fun has a Kinect session. The Twitter followers will remember that I now have a Kinect for Windows sitting on my desk at work. I have gotten pretty handy with the device, but I KNOW I'm missing some good stuff. Finally, we come to Brian Prince's session on "Making Crazy Money with Games and the Cloud." Never mind the fact that we're using Azure at work. Never mind the fact that I'm actually using the cloud in a game. Never mind the fact that the session has the terms "Crazy Money" and "Games" in the title. If you've never seen Brian Prince speak, you're missing out. In the Hands-on Labs, we are not allowed to make our own schedule. Instead, we're asked what sessions we can't miss, and they try to schedule around those times. This was the one session I said I couldn't miss.

    This should complete the technical content for the conference. Coming soon, I'll dig into the certifications I hope to attain. Then, we'll talk about the social activities for the week. Here's a preview of that: I am a member of The Krewe...

    Read the article

  • Simultaneously calling multiple methods on a WCF service from Silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app. As the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I'd share; maybe it can help the next person with this problem.

    The App
    On start, the app would do a number of calls to different methods on a WCF service, this to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This was resulting in quite a long loading time for the whole UI, which was set up so it wouldn't let the user interact with anything until all the service calls had finished. First I broke out the longer-running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section could keep loading independently.

    The Problem
    However, this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn't activate until the long-running call returned. So now I did what I should have done to start with: I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long-running service method was placed, all subsequent calls were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped… I knew of the issues where Silverlight is restricted by the browser's networking features in regards to the number of simultaneous connections etc. However, that just didn't seem to be the issue here; you could clearly see in Fiddler that there were numerous calls, but they were just not returning. I thought the problem might be in the WCF service, but the calls were really not that complicated, and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario: I hit the search engines. I did a whole bunch of searching on things like "multiple simultaneous WCF calls from Silverlight" and "Calling long running WCF services from Silverlight" etc. This, however, pretty much got me nowhere; I found a whole heap of resources on how to do WCF calls from Silverlight, but most of them were very basic and of no use whatsoever.

    The fog is clearing
    It wasn't until I came across the term "WCF blocking calls" and started incorporating that in my searches that I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum, "Long-running WCF call blocking subsequent calls", which discussed the exact problem I was facing and, the best part, one of the guys there had the solution! The short answer is in the forum post, and the guy answering has also done a more extensive blog post about it called "Silverlight, WCF, and ASP.Net Configuration Gotchas" which covers it very well. "So come on, what's the solution?!" I hear you ask, unless you've already gone to the links and looked it up ;)

    The Solution
    Well, it turns out that the issue is founded in a mix of Silverlight, Asp.Net and WCF: basically, if you're doing multiple calls to a single WCF web service and you have Asp.Net session state enabled, the calls will be executed sequentially by the service, hence any long-running calls will block subsequent ones. So why is Asp.Net session state affecting us? We're working in Silverlight, right? Well, as mentioned earlier, by default Silverlight uses the browser's networking stack when doing service calls, hence to the WCF service the call looks like it might as well be coming from a normal Asp.Net page. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack.

    The Client HTTP Stack to the rescue
    By using the following syntax (for example in our App.xaml.cs, Application_Startup method)

    WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

    we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlight's own networking stack, rather than that of the browser, we get around the Asp.Net - WCF session state issue.
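    To see that registration in context, here is a minimal sketch, assuming a default Silverlight project template (the narrower, host-specific registration in the comment uses a made-up address purely for illustration):

    using System.Net;
    using System.Net.Browser;
    using System.Windows;

    public partial class App : Application
    {
        private void Application_Startup(object sender, StartupEventArgs e)
        {
            // Route every http:// request through Silverlight's client HTTP stack
            // instead of the browser's stack, so the WCF calls no longer get
            // serialized by the ASP.NET session on the server.
            WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

            // Alternatively, register a narrower prefix so that only calls to a
            // specific (hypothetical) service host use the client stack:
            // WebRequest.RegisterPrefix("http://services.example.com/", WebRequestCreator.ClientHttp);

            this.RootVisual = new MainPage();
        }
    }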
    The above code specifies that all calls to addresses starting with "http://" should go through the client stack; this can actually be set more granularly, and you can specify it to be used only for certain domains etc.

    Summary
    The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe making it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola

    Read the article
