Search Results

Search found 432 results on 18 pages for 'blind fish'.

Page 14 of 18

  • Why do I have 55 local area connections in ipconfig?

    - by RMorrisey
    Windows Vista Home Premium. I should mention that I am having no problem whatever getting an internet connection. When I type "ipconfig" in the console, I get (55!) messages of 3 lines each, listing a ton of disconnected network connections. My PC only has one network card. Each message looks like this: Tunnel adapter Local Area Connection* 55: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : These don't cause a major problem; they make it a pain, though, to fish upward and find my IP address. How can I get rid of them? Edit: Actually, a few connection numbers are randomly missing from the sequence; so, it's really more like 30 or 40 connection messages, rather than all 55. Not sure why that is, either.
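    The numbered tunnel adapters are typically IPv6 transition interfaces (Teredo, ISATAP, 6to4) rather than real hardware. A minimal sketch of one way to tame them from an elevated command prompt, assuming Vista's stock netsh tooling; the tunnels can be re-enabled later if anything misses them:

        rem Show just the lines that carry the IPv4 address, instead of fishing upward
        ipconfig | findstr /i "IPv4"

        rem Disable the IPv6 transition tunnels that spawn the phantom adapters
        netsh interface teredo set state disabled
        netsh interface isatap set state disabled
        netsh interface 6to4 set state disabled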

    Read the article

  • SQL Server 2008 second instance times out when logging in -- but only the first time?

    - by Kromey
    This is a strange one that has plagued me for a while now. When logging in to the second instance of SQL Server 2008 on one of our database servers, we get a timeout error: Cannot connect to servername\mssqlserver2. Additional information: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server) (This is the error message when trying to connect with Microsoft SQL Server Management Studio; other tools experience the same error, but of course report it in different ways.) Immediately re-attempting to log in works just fine, so whatever the cause, it's ephemeral! This happens regardless of user or authentication type (both Windows and SQL Server authentication methods are supported on this instance). What's even weirder, though, is that the first instance on this server has never once demonstrated this problem. The server is a Windows Server 2008 R2 virtual server, hosted in Microsoft Hyper-V (the host is likewise Server 2008 R2). The server has 2GB of RAM, and seems to regularly be using 90% of that -- could low memory be the cause of this issue? I could see this second instance -- which is not used very often yet -- being swapped out to disk, and then taking too long to load back into memory to respond in time to the connection request, but I'd rather have more than just my own hunch before I go scheduling downtime for this server (the first instance is used regularly) and then just throwing extra resources at it in the blind hope that the problem goes away.
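    One low-risk way to test the memory theory before scheduling downtime is to check what the OS reports and to cap the busy instance so the second one is not squeezed out entirely. A hedged T-SQL sketch; the 1200 MB figure is a placeholder, not a recommendation:

        -- How much physical memory does the box have, and is it under pressure?
        SELECT total_physical_memory_kb / 1024      AS total_mb,
               available_physical_memory_kb / 1024  AS available_mb,
               system_memory_state_desc
        FROM   sys.dm_os_sys_memory;

        -- On the busy first instance: leave headroom for the second instance and the OS
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 1200;   -- placeholder value
        RECONFIGURE;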

    Read the article

  • Windows XP over two monitors, but one of them switches off at boot... how to fix? How to switch back?

    - by jae
    When booting into XP (x64, Athlon II X2 245, 4GB RAM), my main monitor switches off (I've got two 19" TFTs hooked up and two gfx cards: a 4650 (1GB; the primary monitor is on this one) and a 4350 (512MB)). Logging in blind (cursor down key, typing the password) gets me one screen, the secondary. It booted correctly until about two days ago. No clue what the cause is; the last change, if I'm not overlooking something, was installing the ATI 9-12 hotfix. It happened after booting into Windows 7: after returning from 7, it was like this. For some weird reason, I cannot start Catalyst Control Center (I right-click the desktop, choose the CCC entry, the pointer changes to an hourglass for a half-second... and nothing. Likewise with "Properties"... I think, as all windows open on the primary (off) screen, and no entry appears in the task bar for Properties). Completely stumped. Windows 7, same setup, works w/o a hitch. The primary monitor appears to run at some unknown, but pretty low, resolution, as the mouse pointer only moves onto it at about half-height. But w/o CCC or display properties I cannot check, and obviously cannot change anything. Hope this was not too long-winded. And I'm sure I still forgot essential stuff. :P

    Read the article

  • Wiring my internet

    - by u8sand
    I have Verizon internet service and am currently using wifi. My router is in the basement, and my desktop computer is two floors up and on the other side of the house... the worst possible positioning, but that's just how things worked out. My wireless is currently extremely unstable, so I've decided to correct the problem by wiring my computer directly. The problem lies here: when redoing the room next to it (while the wall was open) we went ahead and ran some coaxial cable from our attic to our basement (with plenty of slack on both ends; don't ask me why we didn't go ahead and run a CAT6 cable). The question is: can I use the coaxial cable to carry the internet connection? Naturally the router (which needs to stay where it is) takes a coaxial input and has Ethernet outputs. So maybe I would have to take an Ethernet cable, convert it to coaxial, run the coax to my computer, and convert back to Ethernet. Is it even possible to convert from coaxial to Ethernet? Or do I have to attempt to go ahead and fish a CAT6 cable through my house? I cannot just split the signal, because that would require two routers and two networks (which I don't believe would work with one cable and one ISP; correct me if I'm wrong). Thanks

    Read the article

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name services. I recently moved servers. The old server was running a HIGHLY similar configuration, and all accounts were ported via WHM. However, the server is unable to send, and sometimes receive, email. Errors I am seeing (when I do get an error mail back) state: 450 4.1.8 : Sender address rejected: Domain not found. Edit for clarity: this is the error response from remote mail servers, and numerous independent mail servers come back with the same error. (The email address is merely one valid example.) My first instinct of course was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR was created. My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible, DNS is not my strong suit), or it's something a bit more archaic. I've run out of ideas, and I can't seem to find anything that could explain why servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
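    Given that the rejections come from remote servers, the first thing worth confirming is that the domain and the PTR resolve from outside your own BIND, not just locally. A hedged sketch with dig (from the bind-utils package); 192.0.2.10 is a placeholder for the server's public IP:

        # Query a public resolver rather than the local BIND instance
        dig @8.8.8.8 k-t.org mx +short
        dig @8.8.8.8 k-t.org a +short
        dig @8.8.8.8 k-t.org soa +short

        # Reverse DNS: does the PTR exist and match the name Exim announces in HELO?
        dig @8.8.8.8 -x 192.0.2.10 +short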

    Read the article

  • How to set up a virtual machine in Ubuntu desktop to run Debian Server

    - by stickman
    I want to run a virtual machine on my Ubuntu desktop that runs a Debian server. The purpose of this is to generate Debian packages. I have some C++ applications that were originally developed on my Ubuntu machine, and I need to (re)compile them on a Debian server in order to: (1) build .deb packages for deployment on a Debian server, and (2) make sure that the applications will definitely work on a Debian server. The idea is that I can do 90% of my development on Ubuntu (where I am more comfortable), and deploy a binary package that definitely works on Debian. BTW, I am developing on Karmic Koala (Ubuntu 9.10). [Edit] Following the advice I got so far, I have installed debootstrap and Debian 'Lenny' in /srv/chroot/debian_lenny on my machine. I am not sure this is the server version, but in any case I don't think that matters for my purposes (though it would be useful to know how to specifically install the server version). At the moment, though, I am like a fish out of water, since there is no GUI and all I have in the chroot jail is a console. I had a look in the home folder (I cheated, by using the KNavigator in Ubuntu), and there are no folders there -- which presumably means that no users have been set up as yet in the Debian "system". I would like to know how to do the following: (1) download and install the dev tools needed for (re)compiling my C++ apps; (2) copy my projects from the Ubuntu "system" to the Debian "system"; (3) after building the binaries, create a Debian binary package containing all of my binaries, so that I can install the package on a Debian server (my remote server).
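    A rough sketch of the usual workflow inside a Lenny chroot; the toolchain packages are standard Debian ones, but the extra libraries and the ~/projects/myapp path are placeholders for whatever your applications actually need:

        # On the Ubuntu host: copy the source tree into the chroot, then enter it as root
        sudo cp -r ~/projects/myapp /srv/chroot/debian_lenny/root/myapp
        sudo chroot /srv/chroot/debian_lenny /bin/bash

        # Inside the chroot: install the compiler and Debian packaging helpers
        apt-get update
        apt-get install build-essential debhelper devscripts fakeroot

        # With a debian/ directory in place, build a binary-only .deb
        cd /root/myapp
        dpkg-buildpackage -us -uc -b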

    Read the article

  • Exchange 2010 Transport rules stepping on each other

    - by TopHat
    I have a group of users that I have to restrict email access for, and so far using Exchange transport rules has worked very well. The problem I am having is that Rule 0 is supposed to Bcc the email to a review mailbox but otherwise not change anything, and Rule 9 is supposed to block the email and throw a custom NDR to tell the user why they were blocked. Here are my results in practice, however: if Rule 0 is enabled and Rule 9 is enabled, then only Rule 9 functions; if Rule 0 is disabled and Rule 9 is enabled, then Rule 9 functions; if Rule 0 is enabled and Rule 9 is disabled, then Rule 0 functions. This is after the Transport service has been restarted (multiple times, actually). I have other rule pairs that work correctly; none of those are overlapping rulesets, however: copy email going to an address outside the domain and then block it; copy email coming in from outside and then block it. Here is the rule for copying internal emails (Rule 0): apply rule to messages from a member of [group], blind carbon copy (Bcc) the message to [review mailbox], except when the message is sent to a member of [group] or [email protected]. Here is the rule to block the same email (Rule 9): apply rule to messages from a member of [group], send 'Email to non-supervisors or managers has been prohibited. Please contact your supervisor for more information.' to sender with 5.7.420, except when the message is sent to [recipients], [email protected]. The distribution group used for membership in these rules is used for the other blocking and copying rules and works as expected. Is there something I missed in this setup? All of the copy rules are at the front of the transport rule group and all the actual copies are at the end of the queue, if that makes a difference. Any thoughts as to why the email doesn't get copied when it gets blocked?
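    For comparison, here is roughly how the same pair of rules looks in the Exchange Management Shell, which also makes it easy to confirm the evaluation order via Priority; the group and mailbox names are placeholders, and the exact parameter names should be checked against your Exchange 2010 build:

        # List rules in the order the transport service evaluates them
        Get-TransportRule | Sort-Object Priority | Format-Table Priority, Name

        # Rule 0: Bcc a copy to the review mailbox, leaving the message otherwise untouched
        New-TransportRule -Name "Copy restricted senders" -Priority 0 `
            -FromMemberOf "Restricted Users" -BlindCopyTo "review@example.com" `
            -ExceptIfSentToMemberOf "Supervisors"

        # Rule 9: reject with the custom NDR text
        New-TransportRule -Name "Block restricted senders" -Priority 9 `
            -FromMemberOf "Restricted Users" `
            -RejectMessageReasonText "Email to non-supervisors or managers has been prohibited." `
            -ExceptIfSentToMemberOf "Supervisors"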

    Read the article

  • Multiple Settings for One Monitor

    - by Lynda
    I have my monitor set the way I like it: it shows colors accurately and is not overly bright (white doesn't blind me). However, I edit photos for the web, and while this is great on my screen, I have noticed that on other screens my photos will at times look washed out and not make a good presentation. If I manually change my monitor settings to be brighter (plus a few other tweaks) I can replicate the look of other monitors closely enough to help prevent washed-out/blown-out photos. Here is the issue: I need to be able to change settings between the two fairly easily. Manually setting the monitor for one and then back to the other is time-consuming and a pain to get right every time. I would like to be able to preset certain settings (in software) that I can quickly switch between by pressing hotkeys or clicking a shortcut on my desktop. I have an AMD video card and have attempted to use the AMD VISION Engine Control Center to add presets and then tweak the settings for each; however, when I switch between the presets the settings are the same. How do I build presets in the VISION Engine Control Center that actually work (maybe I misunderstand what it is saving?), or is there a way to do this in Windows 7, or free software that works decently for this purpose?

    Read the article

  • URL autocomplete no longer working in Chrome

    - by Yuji Tomita
    The browser URL autocomplete started behaving differently yesterday. I used to access my top URLs by typing the first one or two letters of a URL and then pressing Enter. Now, I have to visually fish for the right one and push the down arrow to select the URL. Big difference. Does anybody know if I can get the old functionality back somehow? Have I messed up a setting? Example of how my browser used to work: Gmail.com: CMD + L, type G, Enter. Stackoverflow.com: CMD + L, type S, Enter. Normally, the browser bar would already be highlighted with gmail.com after typing the first g. It would narrow the matches depending on what characters were typed next, or simply go there if I pressed Enter. UPDATE: I just realized my history tab looks suspicious: no entries. But clearly Chrome is pulling some data from my history, as I get very personalized recommendations when typing in a letter. UPDATE: Fixed! I saved my bookmarks, removed my ~/Library/Application\ Support/Google/Default directory (careful, it looks like absolutely everything is stored here), restarted Chrome, and within one visit to Gmail.com my autocomplete was filling in my URLs again. Beautiful.

    Read the article

  • Fedora 19 no longer bootable

    - by Parisa
    I had Fedora dual-booted with Windows on my laptop for a while, but after a Windows refresh GRUB was gone and my system booted directly into Windows. I booted Fedora through my system's boot options and, following this tutorial: https://fedoraproject.org/wiki/GRUB_2 , I reinstalled GRUB 2, but then my system booted into an empty GRUB prompt (grub>). So I found the drive containing vmlinuz and initramfs (completely sure about their location and versions) and tried to boot it manually, but after the boot command it said: no suitable video mode found, booting in blind mode, and nothing happened. Such a tragedy... I have already tried the live disk's rescue system. Funnily, the troubleshooting options don't appear on my laptop, while they do on my desktop PC. I can't even get to the boot prompt on my Lenovo IdeaPad Z400 laptop. I also tried EasyBCD so maybe I could boot it with Windows, but it comes up with this error: missing AutoNeoGrub().mbr. Now I have removed the GRUB prompt (don't know why) and it's really hard for me to reinstall my dearly customized Fedora. If anyone knows a way to boot it again, or to reinstall it while keeping my files and installations, I really need it. Thanks. PS: I have already tried the Boot-Repair disk, but it asks me to enable the repo containing grub-efi on my Fedora to reinstall GRUB 2 and fix the boot for me (how could I?).
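    For reference, the usual recovery path is to chroot into the installed system from a live image and reinstall GRUB from there. A hedged sketch for a BIOS/MBR install; /dev/sda3 and /dev/sda1 are placeholders for the real root and /boot partitions, and on a UEFI machine (likely for an IdeaPad Z400) the grub2-efi/shim packages and the config path under /boot/efi apply instead:

        # From a Fedora live session, as root
        mount /dev/sda3 /mnt                  # placeholder: Fedora root partition
        mount /dev/sda1 /mnt/boot             # placeholder: separate /boot, if any
        for d in proc sys dev; do mount --bind /$d /mnt/$d; done
        chroot /mnt

        # Inside the chroot: put GRUB back on the MBR and regenerate its menu
        grub2-install /dev/sda
        grub2-mkconfig -o /boot/grub2/grub.cfg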

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger?
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews

    Read the article

  • Edit Text in a Webpage with Internet Explorer 8

    - by Matthew Guay
    Internet Explorer is often decried as the worst browser for web developers, but IE8 actually offers a very nice set of developer tools.  Here we’ll look at a unique way to use them to edit the text on any webpage. How to edit text in a webpage IE8’s developer tools make it easy to make changes to a webpage and view them directly.  Simply browse to the webpage of your choice, and press the F12 key on your keyboard.  Alternately, you can click the Tools button, and select Developer tools from the list. This opens the developer tools.  To do our editing, we want to select the “Select Element by Click” tool (the mouse-pointer button on the toolbar). Now, click on any spot of the webpage in IE8 that you want to edit.  Here, let’s edit the footer of Google.com.  Notice it places a blue box around any element you hover over to make it easy to choose exactly what you want to edit. In the developer tools window, the element you selected before is now highlighted.  Click the plus button beside that entry if the text you want to edit is not visible. Now, click the text you wish to change, and enter what you wish in the box.  For fun, we changed the copyright to say “©2010 Microsoft”. Go back to IE to see the changes on the page! You can also change a link on a page this way, or even change the text on a button. This may be fun for playing a trick on someone or simply for a funny screenshot, but it can be very useful, too.  You could test how changes in font size would change how a website looks, or see how a button would look with a different label.  It can also be useful when taking screenshots.  For instance, if I want to show a friend how to do something in Gmail but don’t want to reveal my email address, I could edit the text on the top right before I took the screenshot.  Here I changed my Gmail address to [email protected]. Please note that the changes will disappear when you reload the page.  You can save your changes from the developer tools window, though, and reopen the page from your computer if you wish. We have found this trick very helpful at times, and it can be very fun too!  Enjoy it, and let us know how you used it to help you!

    Read the article

  • Create a Customized Tab on the Office 2010 Ribbon

    - by Mysticgeek
    Some MS Office users were put off a bit by the Ribbon feature in 2007 for being cumbersome and confusing. Today we look at a cool new feature in Office 2010 that allows you to create your own custom tabs with specific commands for easier document creation. Create a Customized Tab In our example we’re using Word, but you can create a custom tab in the other Office apps as well. To do so, right-click on the Ribbon and select Customize the Ribbon. The Word Options screen opens up and from here you can manage a lot of customization options. We want to create a new customized tab, so click on the New Tab button. Now give it a name… Now just drag the commands you want to add from the left column over to your new custom group. You have every command available to choose from. You can select specific groups or all commands from the dropdown menu on the left. That is all there is to it…now you have your own customized tab with the commands you use most often to help you work more efficiently. In this example we didn’t add a whole lot of commands, but you can customize it with as many as you need. You can also create other tabs with different sets of commands too. When you create a customized tab in one application, it’s only going to be in that app. For example, if you create one in Word, it’s not going to show in Excel, as commands differ between apps. If you want a custom tab in another Office app you’ll need to create one for it. Another very cool thing you can do is export the customizations to use on another machine or pass them to a coworker. To export the customizations, go to the Customize Ribbon section and at the bottom of the right field click Import/Export, then Export all customizations. Then save the file to a location on your hard drive. To import the settings to another machine, go into Ribbon Customizations and select Import customizations file… then browse to the file you exported. You’ll be prompted to confirm you want to import the customizations… After confirming the choice you’ll see the customization show up on the other machine. This is very handy if you work on several machines throughout the day and want to easily bring your customized tabs with you. If you find yourself using a lot of specific commands throughout the day, creating your own customized tab will help you access them more quickly. If you want to test out Office 2010, it’s currently in Public Beta and can be downloaded for free. Download Office 2010 Beta

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF, it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.” Topic 1: Working with Bad Managers Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter. Topic 2: Working with Remote Teams Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important. I cannot over emphasize how much. And this one line captures how I feel and even communicates the idea clearly! Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it. Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird! Topic 3: Working with Your Nemesis Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with co-worker like that. Mystery Author B – “Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy! Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Developer Profile: Marcelo Quinta

    - by Tori Wieldt
    As the Java developer community lead for Oracle, the best part of my job is going to conferences and meeting Java developers. I’ve had the pleasure to meet men and women who are smart, fun and passionate about Java—they make the Java community happen. The current issue of Java Magazine provides profiles of other young Java developers around the world. Subscribe to read them! Marcelo Quinta. Age: 24. Occupation: Professor, Federal University of Goias. Location: Goias, Brazil. Twitter: @mrquinta. Photo: Marcelo (white polo shirt, center) and class. OTN: When did you realize that you were good at programming? When I was in graduate school, I developed a Java system that worked out the logic of getting the maximum coverage using the fewest resources (for example, the minimum number of soldiers, and their positions, needed for a battlefield). It may not seem difficult, but it's a hard problem to solve, mathematically. Here I was, a freshman, who came up with an app "solving" it. Some Master's students use my software today. It was then I began to believe in what I could do. OTN: What most inspires you about programming? I'm really inspired by the challenges and tension that come from solving complicated problems. Lately, I've been building a new system focused on education and digital inclusion, and it was very gratifying to see it working and the results. I felt useful to the community. OTN: What are some things you would like to accomplish using Java? Java is a very strong platform that gives us the power to develop applications for different devices and purposes, from home automation with little microcontrollers to systems on big servers. I would like to build more systems that integrate into people's lives and different business contexts, from PCs to cell phones and tablets, ubiquitously. I think IT has reached a level where the current challenge is to make systems that leverage existing technologies that are present in daily life. Java gives us a very interesting set of options to put that into practice, especially in systems that require more strength. OTN: What technical insights into Java technology have been most important to you? I have really enjoyed the way that Java has evolved with Oracle, with new features added, many of which were suggested by the community. Java 7 came with substantial improvements in the language syntax, and it seems that Java 8 takes it even further. I also made some applications in JavaFX and liked the new version. The Java GUI is on a higher level than what is offered elsewhere. I saw some JavaFX prototypes running on modern tablets and I got excited. OTN: What would you like to be doing 10 years from now? I want my work to make a difference for individuals or an institution. It would be interesting to be improving one of the systems that I am making today. Recently I've been mixing my hobbies and work, playing with Arduino and home automation. The JHome project, winner of the Duke's Choice Award in 2011, is very interesting to me. OTN: Do you listen to music when you write code? If so, what kind? Absolutely! I usually listen to electronic music (Prodigy, Fatboy Slim and Paul Oakenfold), rock (Metallica, Strokes, The Black Keys) and a bit of local alternative music. I live in Goiânia, "The Brazilian Seattle", and I take full advantage of it. OTN: What do you do when you're not programming? I like to play guitar and to fish. Last year I sold my economy car and bought an old jeep.
Some people called me crazy, but since then I've been having a great time and going on adventures on the backroads of Brazil. Once I broke my glasses in a funny game involving my car's suspension and the airbags. OTN: Does your girlfriend think you are crazy? Crazy is someone who doesn't have the courage to do strange things! My girlfriend likes my style. =D Subscribe to the free Java Magazine to read profiles of other young Java developers. Visit the Java channel on YouTube to see a video of Marcelo in action.

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why. For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, where it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.) But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help. This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks. If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind. The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as are simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. 
That’s not a comment about the jokes I tell, but because people want to look at the queries they run. LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley

    Read the article

  • BI&EPM in Focus June 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News Thomas Kurian Discusses Oracle Exalytics, SAP HANA (replay | preso | press)  Accenture & Oracle Study: The Challenges of Corporate Financial Reporting  (link) Flash Demo: Oracle Hyperion Planning on Exalytics in the Public Sector (link) Flash Demo: OBIEE & Exalytics in Retail (link) Customers Italian Partner Alfa Sistemi implemented at Autovie Venete S.p.A. Integrates Business Intelligence and Performance Management to Improve Efficiency and Speed for Managing Public Works Projects (English version)  / Autovie Venete implementa un sistema integrato di Business Intelligence e Performance Management per migliorare l’efficienza e la tempestività dell’attività di Controlling di Commessa (Italian version). FANCL Gains 360-Degree View of Customers across Multiple Sales Channels, Reduces Reports by 75% Korea Yakult Improves Profit & Loss Analysis with Oracle Hyperion Planning and OBIEE Hill International Streamlines Forecasting, Improves Visibility into Project Productivity and Profitability Children’s Rights in Society Better Supports Organizational Mission with Advanced, Integrated, and Streamlined Business Intelligence Tools Profit: International utility Enel monitors the performance of global subsidiaries with Oracle Hyperion Applications (link) Profit: Charting a New Course: Korean Air gains altitude by leveraging its greatest asset: information (link)   Events June 12: Breaking Away from the Excel Add-In: Welcome to Hyperion Smart View 11.1.2.2 (link) June 13: Upgrading OBIEE 10g to 11g: Best Practices and Lessons Learned (performance architects) (link) June 14, The Netherlands: Strategies for Business Excellence, New Release of Oracle Hyperion EPM Suite (link) June 21: Comprehensive and Accurate Forecasting for Healthcare (link) June 26: What Exactly is Exalytics? (KPI Partners) (link) Webcast Replay: Is Your Company Able to Navigate Through Market Volatility? (link)  Webcast Replay: Is Hope and Email The Core of Your Reconciliation Process? (link) Webcast Replay: Troubleshooting EPM Reporting & Analysis 11.1.2.x  (link) Webcast Replay: Is your Organization Flying Blind when it comes to Understanding Profitability?  (link) Enterprise Performance Management Final Oracle EPM  Information Panel (CIP) survey on cost, profitability and performance reporting/scorecards is now OPEN (link) New on EPM Blog: What's Going on With IFRS? (link) How does Crystal Ball integrate with EPM Solutions? New collateral and demos on Crystal Ball Solution Factory!  (link) New Youtube Video: Business Case Analysis with Oracle Crystal Ball (link) Crystal Ball 11.1.2.2 is released! Grouped Assumptions in Sensitivity Charts, Data Filtering When Fitting Distributions and Parameter Edits When Fitting Distributions to name a few. Get full details from the online New Features Guide (link) New DRM Oracle-by-Examples now available (link) Support Blog: Hyperion Ledgerlink Sample Record and Windows 7: Now you see it, now you don’t  (link) Use Enterprise Manager FMW Control to Troubleshoot Oracle EPM 11.1.2 Family of Products (link) Business  Intelligence Whitepaper: Real-Time Operational Reporting for E-Business Suite via GoldenGate Replication to an Operational Data Store.  
How Oracle enabled real-time operational reporting for its $20B services contract business with Golden Gate & OBIEE (link) KPI Partners ebook: Understanding Oracle BI Components and Repository Modeling Basics (link) “Getting Started with Oracle Endeca Information Discovery” video tutorials now available (link) Oracle BI Publisher Conversion Center: Convert from Crystal, Actuate, or Oracle Reports to Oracle BI Publisher (link) Oracle Fusion Applications: Monthly Partner Updates Webcast Replays to help BI partners understand how OBI, Essbase, BI-Apps and Fusion work together: More on Fusion CRM: Fusion Marketing More on Fusion CRM: Fusion CRM Sales Start-Up Packs and Expert Services for Implementation Partners Introducing the Oracle Fusion Accounting Hub Implementing Fusion Applications using Oracle's Composers Oracle Fusion Applications Co-Existence

    Read the article

  • Get the Picture: Pinterest for Marketers

    - by Mike Stiles
    When trying to determine on which networks to conduct social marketing, the usual suspects immediately rise to the top: Facebook & Twitter, then LinkedIn (especially if you’re B2B), then maybe some Google Plus to hedge SEO bets. So at what juncture do brands get excited about Pinterest? Pinterest has been easy for marketers to de-prioritize thanks to the perception that its usage is so dominated by women. Um, what’s wrong with that? Women make an estimated 85% of all consumer purchases. So if there are indeed over 30 million US women active on it monthly, and they do 92% of the pinning, and 84% are still active on it after 4 years, when did an audience of highly engaged users, very likely to convert into sales, become low priority? Okay, if you’re a tech B2B SaaS product like the Oracle Social Cloud, Pinterest may not be where you focus. But if you operate in the top Pinterest categories, which are truly far-reaching, it’s time to take note of Pinterest’s performance to date: 40.1 million monthly users in the US (eMarketer). Over 30 billion pins, half of which were pinned in the last 6 months. (Big momentum) 75% of usage is on their mobile app. (In solid shape for the mobile migration) Pinterest sharing grew 58% in 2013, beating Facebook, Twitter, and LinkedIn. (ShareThis) Pinterest is the 3rd most popular sharing platform overall (over email), with 48% of all sharing on tablets. Users referred by Pinterest are 10% more likely to buy on e-commerce sites and tend to spend twice that of users coming from Facebook. (Shopify) To be fair, brands haven’t had any paid marketing opportunities on that platform…until recently. Users are seeing Promoted Pins in both category and search feeds from rollout brands like Gap, ABC Family, Ziploc, and Nestle. Are the paid pins annoying users? It seems that, more so than on other social networks, they’re fitting right in to the intended user experience and being accepted, getting almost as many click-throughs as user pins. New York Magazine’s Kevin Roose laid it out succinctly: Pinterest offers a place that’s image-centric, search-friendly, makes things easy to purchase, makes things easy to share, and puts users in an aspirational mood to buy. Pinterest is very confident in the value of that combo and that audience, with CPM rates 5x that of the most expensive Facebook ad, plus (at least for now) required spending commitments and required pin review by Pinterest for quality. The latest developments: a continued move toward search and discovery with enhancements like Guided Search to help you hone in on what interests you, Custom Categories, and the rumored Visual Search that stands to be a liberation from text. And most recently, Pinterest has opened up its API so brands can get access to deeper insights into the best search terms and categories in which to play ball, as well as what kinds of pins stand to perform best in those areas. As we learned in our rundown this week of Social Media Examiner’s Social Media Marketing Industry Report, around 50% of marketers specifically intend on upping their use of Pinterest. If you’re a big believer in fishing where the fish are, that’s probably an efficient position to take. @mikestiles @oraclesocial Photo: Adam Lambert_Gorwyn, freeimages.com

    Read the article

  • How to Waste Your Marketing Budget

    - by Mike Stiles
    Philosophers have long said that if you find out where a man’s money is, you’ll know where his heart is. Find out where the money in a marketing budget is allocated, and you’ll know how adaptive and ready that company is for the near future. Marketing spends are an investment. Not unlike buying stock, the money is placed in areas the marketer feels will yield the highest return. Good stock pickers know the lay of the land, the sectors, the companies, and trends. Likewise, good marketers should know the media available to them, their audience, what they like & want, what they want their marketing to achieve…and trends. So what are they doing? And how are they doing? A recent eTail report shows nearly half of retailers planned on focusing on SEO, SEM, and site research technologies in the coming months. On the surface, that’s smart. You want people to find you. And you’re willing to let the SEO tail wag the dog and dictate the quality (or lack thereof) of your content, such as blogs, to make that happen. So search is prioritized well ahead of social, multi-channel initiatives, email, even mobile - despite the undisputed explosive growth and adoption of it by the public. 13% of retailers plan to focus on online video in the next 3 months. 29% said they’d look at it in 6 months. Buying SEO trickery is easy. Attracting and holding an audience with wanted, relevant content…that’s the hard part. So marketers continue to kick the content can down the road. Pretty risky, since content can draw and bind customers to you. Asked to look a year ahead, retailers started thinking about CRM systems, customer segmentation, and loyalty (again, well ahead of online video, social, and site personalization). What these investors are missing is that social is spreading across every function of the enterprise and will be a part of CRM, personalization, loyalty programs, etc. They’re using social for engagement but not for PR, customer service, and sales. Mistake. Allocations are being made seemingly blind to the trends. Even more peculiar are the results of an analysis Mary Meeker of Kleiner Perkins made. She looked at how much time people spend with media types and how marketers are investing in those media. 26% of media consumption is online, and marketers spend 22% of their ad budgets there. 10% of media time is spent with mobile, but marketers are spending 1% of their ad budgets there. 7% of media time is spent with print, but (get this) marketers spend 25% of their ad budgets there. It’s like being on Superman’s Bizarro World. Mary adds that of the online spending, most goes to search, while spends on content, even ad content, stayed flat. Stock pickers know to buy low and sell high. It means peering, with info in hand, into the likely future of a stock and making the investment in it before it peaks. Either marketers aren’t believing the data and trends they’re seeing, or they can’t convince higher-ups to acknowledge change and adjust their portfolios accordingly. Follow @mikestiles. Image via stock.xchng

    Read the article

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More social curiosity than actual programming question... (I got shot for posting this on Stack Overflow. They sent me here. At least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind, and neatness and consistency in programming are my best friends. A thousand years ago when I took my first programming class (Fortran 66) and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. Ok, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal's periods, Fortran's comma-separated do loops, almost everybody else's semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin): if, etc. Each language has functions, subroutines, or whatever they're called. Some built-in, some user-coded. They were set off by function_name( parameters ); as in sqrt( x ) or rand( y );. Lately, there seems to be a new set of punctuation rules. Especially in C++, where initializers get glued onto the end of variable declarations: int x(0); or auto_ptr p(new gizmo); This usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then "if" and 'for' seem to have grown parens: if(true), for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if "for" is a function, where did the arg-separating commas go? And finally, functions seem to have shed their parens: sqrt (2), select (...). I know, I know, loosening whitespace rules is good. Keep reading. Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, and find that the information the placement of punctuation used to convey is gone? I know full well that K&R put the { at the end of the "if" or "for" to save a byte here and there. Can't use that excuse here. Space as an excuse for loss of readability died as HDD space soared past 100 MiB. Your thoughts are solicited. If there is a good reason to do this, I'll gladly learn it and maybe in another 50 years I'll get used to it. Of course it's good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn't mean you HAVE to, right?

    Read the article

  • Spring boot JAR as windows service

    - by roblovelock
    I am trying to wrap a Spring Boot "uber JAR" with procrun. Running the following works as expected: java -jar my.jar. I need my Spring Boot jar to start automatically on Windows boot. The nicest solution for this would be to run the jar as a service (the same as a standalone Tomcat). When I try to run this I am getting "Commons Daemon procrun failed with exit value: 3". Looking at the spring-boot source, it looks as if it uses a custom classloader: https://github.com/spring-projects/spring-boot/blob/master/spring-boot-tools/spring-boot-loader/src/main/java/org/springframework/boot/loader/JarLauncher.java I also get a "ClassNotFoundException" when trying to run my main method directly: java -cp my.jar my.MainClass. Is there a method I can use to run my main method in a Spring Boot jar (not via JarLauncher)? Has anyone successfully integrated spring-boot with procrun? I am aware of http://wrapper.tanukisoftware.com/; however, due to their licence I can't use it. UPDATE: I have now managed to start the service using procrun with the following script:

        set SERVICE_NAME=MyService
        set BASE_DIR=C:\MyService\Path
        set PR_INSTALL=%BASE_DIR%prunsrv.exe

        REM Service log configuration
        set PR_LOGPREFIX=%SERVICE_NAME%
        set PR_LOGPATH=%BASE_DIR%
        set PR_STDOUTPUT=%BASE_DIR%stdout.txt
        set PR_STDERROR=%BASE_DIR%stderr.txt
        set PR_LOGLEVEL=Error

        REM Path to java installation
        set PR_JVM=auto
        set PR_CLASSPATH=%BASE_DIR%%SERVICE_NAME%.jar

        REM Startup configuration
        set PR_STARTUP=auto
        set PR_STARTIMAGE=c:\Program Files\Java\jre7\bin\java.exe
        set PR_STARTMODE=exe
        set PR_STARTPARAMS=-jar#%PR_CLASSPATH%

        REM Shutdown configuration
        set PR_STOPMODE=java
        set PR_STOPCLASS=TODO
        set PR_STOPMETHOD=stop

        REM JVM configuration
        set PR_JVMMS=64
        set PR_JVMMX=256

        REM Install service
        %PR_INSTALL% //IS//%SERVICE_NAME%

    I now just need to work out how to stop the service. I am thinking of doing something with the Spring Boot actuator shutdown JMX bean. What happens when I stop the service at the moment is: Windows fails to stop the service (but marks it as stopped), the service is still running (I can browse to localhost), and there is no mention of the process in Task Manager (not very good, unless I am being blind).
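    A hedged sketch of one way to fill in the shutdown side, along the lines the poster hints at: because PR_STARTMODE=exe launches the application in a separate "java -jar" process, a stop class cannot close the Spring context directly, but it can call the Spring Boot actuator shutdown endpoint over HTTP. Everything below is an assumption-laden illustration, not the poster's solution: it assumes spring-boot-starter-actuator is on the application's classpath, the shutdown endpoint is enabled (endpoints.shutdown.enabled=true in Boot 1.x), the app listens on port 8080, and the package and class names are made up. PR_STOPCLASS would point at this class, with its jar added to PR_CLASSPATH; main is used as the entry point because custom stop method names are generally only honoured in procrun's jvm mode.

        // Hypothetical stop class for procrun: PR_STOPMODE=java, PR_STOPCLASS=com.example.ServiceStopper.
        // It does not touch the running service JVM directly; it just POSTs to the actuator shutdown endpoint.
        package com.example;

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ServiceStopper {

            public static void main(String[] args) throws Exception {
                // Assumed: actuator on the classpath, shutdown endpoint enabled, default port 8080.
                URL url = new URL("http://localhost:8080/shutdown");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                try (OutputStream os = conn.getOutputStream()) {
                    os.write(new byte[0]); // empty body is enough; the endpoint only needs the POST
                }
                System.out.println("Shutdown endpoint returned HTTP " + conn.getResponseCode());
                conn.disconnect();
            }
        }

    If the graceful shutdown succeeds, Windows should then see the service stop cleanly instead of marking it stopped while the JVM keeps serving requests.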

    Read the article

  • Ext JS Tab Panel - Dynamic Tabs - Tab Exists Not Working

    - by Joey Ezekiel
    Hi, I would appreciate it if somebody could help me with this. I have a Tree Panel whose nodes, when clicked, load a tab into a tab panel. The tabs are loading all right, but my problem is duplication. I need to check whether a tab exists before adding it to the tab panel. I can't seem to get this resolved and it is eating my brain. This should be pretty simple, and I have checked Stack Overflow and the Ext JS forums for solutions, but they don't seem to work for me, or I'm being blind. This is my code for the tree:

        var opstree = new Ext.tree.TreePanel({
            renderTo: 'opstree',
            border: false,
            width: 250,
            height: 'auto',
            useArrows: false,
            animate: true,
            autoScroll: true,
            dataUrl: 'libs/tree-data.json',
            root: {
                nodeType: 'async',
                text: 'Tool Actions'
            },
            listeners: {
                render: function() {
                    this.getRootNode().expand();
                }
            }
        })

        opstree.on('click', function(n){
            var sn = this.selModel.selNode || {}; // selNode is null on initial selection
            renderPage(n.id);
        });

        function renderPage(tabId) {
            var TabPanel = Ext.getCmp('content-tab-panel');
            var tab = TabPanel.getItem(tabId);
            //Ext.MessageBox.alert('TabGet',tab);
            if(tab){
                TabPanel.setActiveTab(tabId);
            }
            else{
                TabPanel.add({
                    title: tabId,
                    html: 'Tab Body ' + (tabId) + '',
                    closable:true
                }).show();
                TabPanel.doLayout();
            }
        }
        });

    and this is the code for the Tab Panel:

        new Ext.TabPanel({
            id:'content-tab-panel',
            region: 'center',
            deferredRender: false,
            enableTabScroll:true,
            activeTab: 0,
            items: [{
                contentEl: 'about',
                title: 'About the Billing Ops Application',
                closable: true,
                autoScroll: true,
                margins: '0 0 0 0'
            },{
                contentEl: 'welcomescreen',
                title: 'PBRT Application Home',
                closable: false,
                autoScroll: true,
                margins: '0 0 0 0'
            }]
        })

    Can somebody please help?
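    A note on the likely cause, judging only from the code above: the add() call creates each new tab with a title but never assigns it an id, so on the next click TabPanel.getItem(tabId) finds nothing and a duplicate gets added. A minimal sketch of the fix (assuming each tree node id is unique and valid as a component id) is to set id when adding the tab:

        function renderPage(tabId) {
            var TabPanel = Ext.getCmp('content-tab-panel');
            var tab = TabPanel.getItem(tabId);
            if (tab) {
                TabPanel.setActiveTab(tab);
            } else {
                TabPanel.add({
                    id: tabId,           // without an explicit id, getItem(tabId) never matches
                    title: tabId,
                    html: 'Tab Body ' + tabId,
                    closable: true
                }).show();
                TabPanel.doLayout();
            }
        }

    With the id in place, the existing-tab branch can also pass the component itself to setActiveTab, which avoids any ambiguity between ids and titles.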

    Read the article

  • Setting values and display Text in Android Spinner

    - by kaibuki
    Hi, I need help in setting up a value and a display text in a Spinner. Right now I am populating my Spinner with an array adapter, e.g. mySpinner.setAdapter(myAdapter); and as far as I know, after doing this the display text and the value of the Spinner at a given position are the same thing. The only other attribute I can get from the Spinner is the position of the item. In my case I want to make the Spinner work like the drop-down box we have in .NET, which holds both a text and a value: the text is displayed and the value is kept at the back end. So when the drop-down selection changes, I can use either its selected text or its value, but that is not the case with the Android Spinner. For example:

        Text       Value
        Cat        10
        Mountain   5
        Stone      9
        Fish       14
        River      13
        Lion       17

    From the above array I am only displaying the text, and what I want is that when the user selects an item I get its value, i.e. when Mountain is selected I get 5. I hope this example made my question a bit clearer... Thanks!
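    A common pattern for this (not the only one) is to wrap each text/value pair in a small class whose toString() supplies the display text, hand an array of those objects to an ArrayAdapter, and cast getSelectedItem() back to read the value. The sketch below is illustrative only: the class names, layout, and widget id (SpinnerItem, ExampleActivity, R.layout.example, R.id.my_spinner) are all made up.

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.ArrayAdapter;
        import android.widget.Spinner;

        public class ExampleActivity extends Activity {

            /** Wrapper: toString() is what the Spinner displays, getValue() is the back-end value. */
            static class SpinnerItem {
                private final String text;
                private final int value;

                SpinnerItem(String text, int value) {
                    this.text = text;
                    this.value = value;
                }

                int getValue() { return value; }

                @Override
                public String toString() { return text; } // ArrayAdapter calls this for the display text
            }

            private Spinner mySpinner;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.example);                 // hypothetical layout containing the Spinner
                mySpinner = (Spinner) findViewById(R.id.my_spinner);

                SpinnerItem[] items = {
                        new SpinnerItem("Cat", 10), new SpinnerItem("Mountain", 5),
                        new SpinnerItem("Stone", 9), new SpinnerItem("Fish", 14),
                        new SpinnerItem("River", 13), new SpinnerItem("Lion", 17)
                };
                ArrayAdapter<SpinnerItem> adapter = new ArrayAdapter<SpinnerItem>(
                        this, android.R.layout.simple_spinner_item, items);
                adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
                mySpinner.setAdapter(adapter);
            }

            /** Read the back-end value of whatever is currently selected. */
            int selectedValue() {
                return ((SpinnerItem) mySpinner.getSelectedItem()).getValue();
            }
        }

    Because the adapter shows toString() for each row, the Spinner displays "Mountain" while getValue() returns 5, which is the .NET-style text/value split the question asks for.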

    Read the article
