Search Results

Search found 16924 results on 677 pages for 'oracle technical'.


  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like “Watch the pennies and the pounds/dollars will take care of themselves”, meaning that if you pay attention to the small things then the larger things are going to fare well too. I am lucky enough to be a Friend of Red Gate and once in a while I get told about new features in their tools and have a test copy of the software to trial. I got one of those emails a week or so ago and I have been exploring the SQL Prompt 6 EAP since then. One really useful feature of long standing in SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example I can type “ssf” and then press the tab key and the text is expanded to SELECT * FROM. There are lots of these combinations and it is possible to create your own really easily. To create your own you use the Snippet Manager interface to define the shortcut letters and the code that you want to have put in their place. Let’s look at an example. Say I am writing a blog about something and want to have the demo code create a temporary table. It might look like this. The first time you run the code everything is fine, a lovely set of dates fills the results grid, but run it a second time and this happens. Yep, we didn’t destroy the temporary table so the CREATE statement fails when it finds the table already exists. No matter, I have a snippet created that takes care of this. Nothing too technical here but you will see that in the Code section there is $CURSOR$; this isn’t a TSQL keyword but a marker for SQL Prompt to place the cursor in that position when the Code is pasted into the SSMS Editor. I just place my cursor above the CREATE statement and type “ifobj” – the shortcut for my code to DROP the temporary table – which has been defined in the Snippet Manager as below. This means I am right away ready to type the name of the offending table. Pretty neat and it’s been very useful in saving me lots of time over many years. The news for SQL Prompt 6 is that Red Gate have added a new Snippet Command of $PASTE$. Let’s alter our snippet to the following and try it out. Once again, we will type “ifobj” in the SSMS Editor, but first of all, highlight the name of the table #TestTable and copy it to your clipboard. Now type “ifobj” and press Tab… Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted TSQL. This means I don’t need to type the table name into the code snippet, it’s already there and I am seeing a fully functioning piece of TSQL ready to run. This makes it even easier to write TSQL quickly and consistently. Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year and help take the hard work out of writing neat, accurate TSQL. If you want to try out SQL Prompt all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.
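    The post's screenshots are not reproduced in this excerpt, so as a hedged sketch (not Fatherjack's actual snippet body), a typical drop-if-exists guard of the kind "ifobj" expands to might look like the following, shown with the table name already merged from the clipboard in place of $PASTE$:

      -- Sketch only: a drop-if-exists guard of the sort "ifobj" pastes in.
      -- In the snippet definition the table name would be the $PASTE$ placeholder;
      -- here it is shown already replaced with the clipboard contents (#TestTable).
      IF OBJECT_ID('tempdb..#TestTable') IS NOT NULL
          DROP TABLE #TestTable;

      CREATE TABLE #TestTable (SomeDate DATETIME);
      INSERT INTO #TestTable (SomeDate) VALUES (GETDATE());

      SELECT SomeDate FROM #TestTable;

    Run twice in a row, this no longer fails on the CREATE, which is exactly the behaviour the snippet is there to give you.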

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry. My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to rollback fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level. Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience. Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do. Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is and why. Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again. Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof. I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to rollback to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well it will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states. 
The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on Operational guidance as your Configuration remit. What I wanted to say was that I have always been for atomic, transactional configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
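    As a rough illustration of the "make it so" idea (my own sketch, not from the post; the service and share names are invented), each step inspects the current state before acting, and SupportsShouldProcess gives you the -WhatIf audit run described in Scenario 4:

      function Set-DesiredState {
          [CmdletBinding(SupportsShouldProcess = $true)]
          param(
              [string]$ServiceName = 'W3SVC',     # invented example values
              [string]$ShareName   = 'Deploy',
              [string]$SharePath   = 'C:\Deploy'
          )

          # IF the service isn't started then start it...
          $svc = Get-Service -Name $ServiceName -ErrorAction SilentlyContinue
          if ($svc -and $svc.Status -ne 'Running') {
              if ($PSCmdlet.ShouldProcess($ServiceName, 'Start service')) {
                  Start-Service -Name $ServiceName
              }
          }

          # ...and IF the share doesn't exist then create it.
          if (-not (Get-SmbShare -Name $ShareName -ErrorAction SilentlyContinue)) {
              if ($PSCmdlet.ShouldProcess($ShareName, 'Create share')) {
                  New-SmbShare -Name $ShareName -Path $SharePath | Out-Null
              }
          }
      }

      # Deployment run:            Set-DesiredState
      # "Prove nothing changed":   Set-DesiredState -WhatIf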

    Read the article

  • Customer owes me half my payment. Should I take ownership of his AWS account for charging? How?

    - by Cawas
    Background They paid me my first half (back on April 15th) before we could even get into an agreement. Very nice of him! Then I finished the 2-week job of setting up the servers, using the AWS credentials he had just bought. I waited another 2 weeks for everything to settle, and it was all running fine. He did what he needed with his sftp account, and everyone was happy. Now, it has been almost 2 months since I finished the job and I still haven't got the 2nd half. Admittedly, it's not much money (about U$400, converted), but it would help me pay the bills at least. Heck, the Amazon bills they are paying are a little less than that (for now). Measures I'm wondering how I can go about charging him now. First thought, of course, would be taking everything down and saying "pay now, or be doomed". If that's not good enough, then I've lost it. I have no contract and I doubt I could get a lawsuit going in this country for such a low value based only on emails. And I don't really want to get too aggressive here - there might be a business chance in the future and I don't want to ruin it. Second thought would be just changing the password. But then he could probably gain access again by some recovery means. That's mainly where my question lies. How can I do it without leaving any room for recovery on his side? I even got the first AWS "your account was created" mail from him, showing me I could begin my job back then. Lastly, do you have any other ideas on what I can and should do in this case? Responding to Answers Please consider reading the current answers and comments. This is not a very simple case. I've considered many, many options (including all the lawful ones) before arriving at the ones I've listed here, and I am willing to take the loss and all that. That's not the point. The point is being practical here. I will call him again and talk about it. I will push hard on getting lawyers and getting a contract. I am ready to go all the way while I have time and energy for it. But, in practice, there is this extra thing I can do to assure myself of the work I've done. I can basically take it back and delete everything! I'd only take his password because I can find no other way to do it within Amazon. Maybe contacting Amazon and explaining the situation? I don't know. Give me ideas on the technical side! And thank you everyone for the attention and for helping me clarify the issue so far! :)

    Read the article

  • Test your internet connection - Emtel Fixed Broadband

    Already at the begin of April, I had a phone conversation with my representative at Emtel Ltd. about some upcoming issues due to the ongoing construction work in my neighbourhood. Unfortunately, they finally raised the house two levels above ours, and of course this has to have a negative impact on the visibility between the WiMAX outdoor unit on the roof and the aimed access point at Medine. So, today I had a technical team here to do a site survey and to come up with potential solutions. Short version: It doesn't look good after all. The site survey Well, the two technicians did their work properly, even re-arranged the antenna to check the connection with another end point down at La Preneuse. But no improvements. Looks like we are out of luck since the construction next door hasn't finished yet and at the moment, it even looks like they are planning to put at least one more level on top. I really wonder about the sanity of the responsible bodies at the local district council. But that's another story. Anyway, the outdoor unit was once again pointed towards Medine and properly fixed with new cable guides (air from the sea and rust...). Both of them did a good job and fine-tuned the reception signal to a mere 3 over 9; compared to the original 7 over 9 I had before the daily terror started. The site survey has been done, and now it's up to Emtel to come up with (better) solutions. Well, I wouldn't mind to have an unlimited, symmetric 3G/UMTS or even LTE connection. Let's see what they can do... Testing the connection There are several online sites available which offer you to check certain aspects of your internet connection. Personally, I'm used to speedtest.net and it works very well. I think it is good and necessary to check your connection from time to time, and only a couple of days ago, I posted the following on Emtel's wall at Facebook (21.05.2013 - 14:06 hrs): Dear Emtel, could you eventually provide an answer on the miserable results of SpeedTest? I chose Rose Hill (Hosted by Emtel Ltd.) as testing endpoint... Sadly, no response to this. Seems that the marketing department is not willing to deal with customers on Facebook. Okay, over at speedtest.net you can use their Flash-based test suite to check your connection to quite a number of servers of different providers world-wide. It's actually very interesting to see the results for different end points and to compare them to each other. The results Following are the results of Rose Hill (hosted by Emtel) and respectively Frankfurt, Germany (hosted by Vodafone DE): Speedtest.net result of 30.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Fixed Broadband) Speedtest.net result of 30.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Fixed Broadband) Luckily, the results are quite similar in terms of connection speed; which is good. I'm currently on a WiMAX tariff called 'Classic Browsing 2', or Fixed Broadband as they call it now, which provides a symmetric line of 768 Kbps (or roughly 0.75 Mbps). In terms of downloads or uploads this means that I would be able to transfer files in either direction with approximately 96 KB/s. Frankly speaking, thanks to compression, my choice of browser and operating system I usually exceed this value and I have download rates up to 120 KB/s - not too bad after all. Only the ping times are a little bit of concern. 
Due to the difference in distance or, better said, based on the number of hops between the endpoints, they indicate the amount of time it takes to send a packet from your machine to the remote server and get a response back. A lower value is better, and usually the ping is less than 300 ms between Mauritius and Europe. The alternatives in Mauritius Not sure whether I should even note this down, because for my requirements there are no alternatives to Emtel WiMAX at the moment. It would be great to have your opinion on the situation of internet connectivity in Mauritius. Are there really alternatives? And if so, what are the conditions?

    Read the article

  • My 2011 Professional Development Goals

    - by kerry
    I thought it might be a good idea to post some professional goals for 2011.  Hopefully, I can look at this list at the end of the year and have accomplished most of them. Release an Android app to the marketplace – I figured I would put this first because I have one that I have been working on for a while and it is about ready.  Along with this, I would like to start another one and continue to develop my Android skills. Contribute free software to the community – Again, I have an SMF plugin that will fill this requirement nicely.  Just need to give it some polish and release it.  That’s not all, I would like to add a few more libraries on github, or possibly contribute to an open source project. Regularly attend a user group meetings outside of Java – A great way to meet people and learn new things. Obtain the Oracle Certified Web Developer Certification – I got the SCJP a few years ago and would like to obtain another one.  One step closer to Certified Enterprise Architect. Learn scala – As a language geek, I like to stick to the Pragmatic Programmer’s ‘learn a new language every year’ rule (last year was Ruby).  Scala presents some new concepts all wrapped in a JVM-based OOP language.  Time to dig in. Write an app using JSF – New JEE 6 features are pretty slick.  I want to really leverage them in an app. Present at a user group meeting – Last but not least, I would like to improve my public speaking and skills in presenting.  Also, is a great reason to dig in to some latest and greatest tech. Use git more, and more effectively – Trying to move all my personal projects from Subversion to Git. That’s it.  A little daunting, but I am confident I can at least touch on most of these and it’s a great roadmap to my professional development.

    Read the article

  • The partition table is corrupt

    - by Tim
    I have a corrupt partition table on a laptop that is running Ubuntu 10.04. Before the partition table was corrupted I had the following partitions: 2 primary partitions: 1st - NTFS 2nd - Extended 4 logical partitions built within the 2nd, extended one: 1st NTFS (68 GiB) 2nd Linux (19 GiB) 3rd Swap (1.4 GiB) 4th Linux (24 GiB) The physical order of these partitions was the following: ( 4th Linux ) - ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) The logical order of the partitions was different: ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) ( 4th Linux ) The NTFS partition was big and it resided between the 2 Linux partitions, and neither of those partitions had enough space to install Oracle 11g. Therefore, I decided to either a) move the NTFS partition to the left or b) remove it completely and extend the partition where Linux resides. As a tool I chose GParted. But unfortunately it was not able to move the partition, because it found that in the NTFS partition there are some blocks that are referenced multiple times. It was not able to remove the partition either, because in that case the partitions that follow it ( 2nd Linux ) - ( 3rd Swap ) would, in its opinion, also have to be removed, because the organization of an extended partition is a linked list. Since GParted was not able to do such a thing I tried to find another tool. I found the diskdrake tool on the PCLinuxOS distribution of Linux. That tool silently deleted the ( 1st NTFS ) partition and I thought that everything was fine. But diskdrake has damaged the partition table in a way that I am not able either to boot from the hard disk or to see the partitions with GParted, or even with diskdrake itself! Fortunately I have a live CD of Ubuntu 8.10 and I am able to boot and see the hard disk. I have 2 ideas how I can solve the problem: 1) Manually change the disk partition entries and point them to the correct partitions. 2) Create a partition table with GParted that is as close as possible to the previous one. I find the 2nd approach less time consuming but some data will be lost because it is not possible to place the borders of the partitions exactly where they were before. And moreover I am not sure if such an approach would work, for example, whether the OS would be able to locate files after repartitioning. I feel like it will, but I'm not 100% sure. Are there some ideas how the problem may be solved?

    Read the article

  • There are 2 jobs available - which one sounds better all round [closed]

    - by Steve Gates
    I am currently employed at a company where we scrape by each year breaking even, sometimes making a little profit. The development environment is very relaxed and we have a laugh. My colleagues are not interested in improving their knowledge unless they have to, so trying to get them to adopt things like TDD is a non-starter. My development manager is stuck in .Net 2 land and refuses to use things like LINQ. He over-complicates architecture and writes very unreadable code; here's an example: SortedList<int, SortedList<int, SortedList<int, MyClass>>> The MD of the company has no drive and lets the one sales guy bring in the contracts. We are not busy all the time and this allows me time to look at new technology and learn. In terms of using things like TDD, my development manager has no problem with it and can kind of see the purpose of it, he just won't use it himself. This means I am alone in learning new things and am often resorting to StackOverflow to make sure I get things right. The company has a lot of flexibility: I can work from home if needs be, and when my daughter was born they let me work from home 1 day a week. However, they expect this flexibility in return, often asking me to travel on a Friday afternoon for the following week. Sometimes it's abroad. We are also pretty much on call 24/5 as we have engineers in various countries. Also we have no testers, so most of the testing is done by us developers and some testing by engineers. Either way, no-one likes testing! I have been offered a role at a company I worked at 5 years ago. They were quite Victorian in their working practices but it appears to have relaxed now, although I suspect it is still reasonably formal. There is a new team of developers I don't know and they are about to move to new offices. The team lead is a guy that was there when I was, and I get the impression he takes his role seriously and likes his formal procedures and documentation. I think some of the Victorian practices may have rubbed off on him. However he did say that if things crop up then, as long as he can trust the person, they can work at home, although he prefers people in the office. The team uses SCRUM, TDD and SOLID design principles so they are quite up to date in technology. They are reasonably Microsoft focused. It appears the Technical Director might be the R&D man, researching new technology on his own and not allowing developers to play with it. He might possibly be a super developer who makes all the decisions that no one can argue with. They are currently moving from NHibernate to Entity Framework, based on issues where their queries sometimes seem to fail and a feeling that NHibernate is stagnant. They have analysts and a QA team. The MD is focused and they are an expanding company making a profit each year. I'm not sure what the team morale is and whether they have a laugh. When I had a tour around the office they were there in dead silence. I'm really unsure which role is the best for me and going with my gut instinct is useless as I'm not sure what my gut is telling me. Based on the information above, which role would you choose and why?

    Read the article

  • Bowing to User Experience

    As a consumer of geeky news it is hard to check my Google Reader without running into two or three posts about Apple's iPad and in particular the changes to the developer guidelines which seemingly restrict developers to using Apple's Xcode tool and Objective-C language for iPad apps. One of the alternatives to Objective-C affected is MonoTouch, an option with some appeal to me as it is based on the Mono implementation of C#. Seemingly restricted is the key word here; as far as I can tell, no official announcement has been made about its fate. For more details around MonoTouch for iPhone OS, check out Miguel de Icaza's post: http://tirania.org/blog/archive/2010/Apr-28.html. These restrictions have provoked some outrage as the perception is that Apple is arrogantly restricting developers' freedom to create applications as they choose and perhaps unwittingly shortchanging iPhone/iPad users who won't benefit from these now never-to-be-made great applications. Apple's response has mostly been to say they are concentrating on providing a certain user experience to their customers, and to do this, they insist everyone uses the tools they approve. Which isn't a surprising line of reasoning given Apple restricts the hardware used and content of the apps already. The vogue term for this approach is "curated", as in a benevolent museum director selecting only the finest artifacts for display or a wise gardener arranging the plants in a garden just so. If this is what a curated experience is like it is hard to argue that consumers are not responding. My iPhone is probably the most satisfying piece of technology I own. Coming from the Razr, it really was a revolution in how the form factor, interface and user experience all tied together. While the curated approach reinvented the smart phone genre, it is easy to forget that this is not a new approach for Apple. MacBooks and Macs are Apple hardware that run Apple software. And they've been successful, but not quite in the same way as the iPhone or iPad (based on early indications). Why not? Well a curated approach can only be wildly successful if the curator a) makes the right choices and b) offers choices that no one else has. Although its advantages are eroding, the iPhone was different from other phones: a unique, focused, touch-centric experience. The iPad is an attempt to define another category of computing. Macs and MacBooks are great devices, but are not fundamentally a different user experience than a PC; you still have windows, file folders, mouse and keyboard, and similar applications. So the big question for Apple is can they hold on to their market advantage, continuing to innovate in user experience and stay on top? Or are they going to be like Xerox, where the rest of the world says thank you for the windows metaphor, now let me implement that better? It will be exciting to watch, with Android already a viable competitor and Microsoft readying Windows Phone 7. And to close the loop back to the restrictions on developing for iPhone OS: at this point the main target appears to be Adobe and Adobe Flash. Apple's calculation is that a) they don't need those developers or b) the developers they want will learn Apple's stuff anyway. My guess is that they are correct; that as much as I like the idea of developers having more options, I am not going to buy a competitor's product to spite Apple unless that product is just as usable. For a non-technical consumer, I don't know that this conversation even factors into the buying decision. 
If it did, we'd be talking about how Microsoft is trying to retake a slice of market share from the behemoth that is Linux.

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!) About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned. You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience. My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem, and a CD-ROM, built-in! It may have even played DVD's. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPU's, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years. The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft, WiFi in Hawaii and has presented many a deck. It has traveled with me tens of thousands of miles. Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VM's) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%. The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but sometimes a long haul across a large airport, you know you're carrying it. Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway. So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

  • Getting Started with Cloud Computing

    - by juanlarios
    You’ve likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; however, an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy. Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others. Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Generally, Private Cloud implementations today are found in larger organizations but they are also viable for small and medium-sized businesses since they generally allow an automation of services and reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate Private Cloud is important in order to be successful. So – how do you get started? The first step is to determine what makes the most sense to your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy and then download and install the Microsoft Private Cloud technologies including Windows Server 2008 R2 Hyper-V and System Center 2012 in your own environment and take it for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering such as the IT Virtualization Boot Camps and more to get you started with these technologies hands on. Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year through something we call "The Global Relationship Study" – they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think. Cloud Computing Resources: Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products. Microsoft Virtual Academy – for free online training to help improve your IT skillset. 
Office 365 Trial/Info page – get more information or try it out for yourself. Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud. Windows Intune Trial/Info – get more information or try it out for yourself. Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online. Additional Resources You May Find Useful: Springboard Series Your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure. TechNet Evaluation Center Try some of our latest Microsoft products for free, Like System Center 2012 Pre-Release Products, and evaluate them before you buy. AlignIT Manager Tech Talk Series A monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate real-time or watch the on-demand recording. Tech·Days Online Discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add in functionality to one product that allow non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something not originally intended for an RDBMS to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do. But there’s a distinct danger here. In nature, when a population shares too many of the same traits, it can cause a complete collapse if a situation exploits a weakness shared by that population. The same is true with not using the righttool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not.  So take some time today to learn something new. The way I do this is to select a given problem, and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat-files using PowerShell as an interface. No, I’m not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the “little grey cells” and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • High Profile ASP.NET websites

    - by nandos
    About twice a month I get asked to justify the reason "Why are we using ASP.NET and not PHP or Java, or buzz-word-of-the-month-here, etc". 100% of the time the questions come from people that do not understand anything about technology. People that would not know the difference between FTP and HTTP. The best approach I found (so far) to justify it to people without getting into technical details is to just say "XXX website uses it". Which I get back "Oh...I did not know that, so ASP.NET must be good". I know, I know, it hurts. But it works. So, without getting into the merit of why I'm using ASP.NET (which could trigger an endless argument for other platforms), I'm trying to compile a list of high profile websites that are implemented in ASP.NET. (No, they would have no idea what StackOverflow is). Can you name a high-profile website implemented in ASP.NET? EDIT: Current list (thanks for all the responses): (trying to avoid tech sites and prioritizing retail sites) Costco - http://www.costco.com/ Crate & Barrel - http://www.crateandbarrel.com/ Home Shopping Network - http://www.hsn.com/ Buy.com - http://www.buy.com/ Dell - http://www.dell.com Nasdaq - http://www.nasdaq.com/ Virgin - http://www.virgin.com/ 7-Eleven - http://www.7-eleven.com/ Carnival Cruise Lines - http://www.carnival.com/ L'Oreal - http://www.loreal.com/ The White House - http://www.whitehouse.gov/ Remax - http://www.remax.com/ Monster Jobs - http://www.monster.com/ USA Today - http://www.usatoday.com/ ComputerJobs.com - http://computerjobs.com/ Match.com - http://www.match.com National Health Services (UK) - http://www.nhs.uk/ CarrerBuilder.com - http://www.careerbuilder.com/

    Read the article

  • TFS Build errors TF224003, TF215085, TF215076

    - by iamdudley
    Hi, I am using TFS2008 and VS2008. I run nightly builds for about 20 applications using one build agent and the builds are scheduled for either 1am or 2am. Most of the build succeed, however 6 of them fail regularly with similar errors. The errors are either the first two below, or the third one by itself: TF215085: An error occurred while connecting to agent \xxxx\BUILDMACHINE: TF215076: Team Foundation Build on computer BUILDMACHINE (port 9191) is not responding. (Detail Message: The request was aborted: The operation has timed out.) 11/04/2010 2:10:10 AM TF224003: An exception occurred on the build computer BUILDMACHINE: The build (vstfs:///Build/Build/2632) has already completed and cannot be started again.. TF215085: An error occurred while connecting to agent \yyyyy\BA_WKSTFSBUILD: Team Foundation services are not available from server srvtfs. Technical information (for administrator): The operation has timed out It looks to me like some kind of communication error, maybe the port gets over loaded - can this happen? Should I spread the builds out a bit more? In the build definition it says "Queue the build on the default build agent at", so I figured if I scheduled them to start at the same time they would be queued and occur sequentially. Most of the suggestions I've found online for these errors are for all or nothing scenarios where no builds work at all whereas my problem is most build but some consistently do not. Judging by the dates of the last successful builds of these 6 failing builds I believe it is the same 6 failing every night. (I'm editing the build definitions now to keep the failed builds so I can get some more info on the problem) Any help on this would be much appreciated. James.

    Read the article

  • UISplitViewController and complex view heirarchy

    - by Jasconius
    I'm doing an iPad tech demo and I'm running into a serious technical problem. I have an app concept that leverages UISplitViewController, but NOT as the primary controller for the entire app. The app flow could be described roughly as this: Home screen (UIViewController) List-Detail "Catalog" (UISplitViewController) Super Detail Screen (UIViewController, but could conceivably also be a child of the SplitView). The problem is in the flow between Home and Catalog. Once a UISplitViewController view is added to the UIWindow, it starts to throw hissy fits. The problem can be summarized as this: When a UISplitView generates a popover view, it appears to then be latched to its parent view. Upon removing the UISplitView from the UIWindow subviews, you will get a CoreGraphics exception and the view will fail to be removed. When adding other views (presumably in this case, the home screen to which you are returning), they do not autorotate; instead, the UISplitView, which has failed to be removed due to a CG exception, continues to respond to the rotation, causing horrible rendering bugs that can't just be "dealt with". At this point, adding any views, even re-adding the SplitView, causes a cascade of render bugs. I then tried simply leaving the SplitView ever present as the "bottom" view, and kept adding and removing the Home Screen on top of it, but this fails as the SplitView dominates the orientation change calls, and the Home Screen will not rotate, even if you call [homeScreen becomeFirstResponder]. You can't put a SplitView into a hierarchy like UINavigationController (you will get an outright runtime error), so that option is off the table. Modals just look bad and are discouraged anyway. My presumption at this moment is that the only proper way to deal with this problem is to somehow "disarm" UISplitViewController so that it can be removed from its parent view without throwing an unhandled exception, but I have no idea how. If you want to see an app that does exactly what I need to do, check out GILT Groupe in the iPad app store. They pulled it off, but they seem to have programmed an entire custom view transition set. Help would be greatly appreciated.

    Read the article

  • cookieless sessions with ajax

    - by thezver
    ok, I know you get sick of this subject. me too :( I've been developing a quite "big application" with PHP & the Kohana framework for the past 2 years, somewhat successfully using my framework's authentication mechanism. but within this time, and as the app has grown, many concerning state-preservation issues have arisen. main problems are that cookie-driven sessions: can't be used for web-service access ( at least it's really not nice to do so.. ) are in many cases problematic with mobile access don't allow multiple simultaneous apps on the same browser ( can be resolved by hard trickery, but still.. ) require many configurations and a lot of mess to work 100% right, and that's without the browser issues ( disabled cookies, old browser bugs & vulnerabilities etc ) many other session flaws stated in this old thread : http://lists.nyphp.org/pipermail/talk/2006-December/020358.html After really long research, and without any good library/on-hand solution to fit my needs, I came up with a custom solution to the majority of those problems. Basically, it's about emulating sessions with ajax calls, with additional security/performance measures: state preserved by interchanging SID(+hash) with the client on ajax calls. state data saved in memcache (or equivalent), indexed by SID security achieved by: appending an unpredictable hash to the SID regenerating the hash on each request & validating it validating the fingerprint of the client on each request ( referrer, os, browser etc ) (*)condition: ajax calls are not simultaneous, to prevent a race condition with the session token. (hopefully Ext-Direct solves that for me) At first glance that is supposed to be no less secure than an equivalent cookie-driven implementation, and at the same time it's simple, maintainable, and resolves all the cookie flaws.. But I'm really concerned because I often hear the rule "don't try to implement custom security solutions". I will really appreciate any serious feedback about my method, and any alternatives. also, any tip about how to preserve state on page refresh without cookies would be great :) but that's a small technical problem. Sorry if I overlooked some similar post.. there are billions of them about sessions. Big thanks in advance ( and for reading until here ! ).
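    For a feel of what the validation side of such a scheme can look like, here is a minimal PHP sketch (my own, using the memcached extension and modern PHP built-ins; none of this code is from the post): the client sends sid + hash with each ajax call, the server checks the hash and the client fingerprint, then rotates the hash for the next request.

      <?php
      // Minimal sketch of per-request token validation against memcached.
      $mc = new Memcached();
      $mc->addServer('127.0.0.1', 11211);

      function fingerprint(): string {
          return hash('sha256',
              ($_SERVER['HTTP_USER_AGENT'] ?? '') . '|' . ($_SERVER['REMOTE_ADDR'] ?? ''));
      }

      $sid  = $_POST['sid']  ?? '';
      $hash = $_POST['hash'] ?? '';

      $state = $mc->get("sess:$sid");
      if ($state === false
          || !hash_equals($state['hash'], $hash)          // token must match
          || !hash_equals($state['fp'], fingerprint())) { // client fingerprint must match
          http_response_code(401);
          exit;
      }

      // Rotate the hash so a captured token is only good for a single request.
      $state['hash'] = bin2hex(random_bytes(16));
      $mc->set("sess:$sid", $state, 1800);                // 30-minute sliding expiry

      header('Content-Type: application/json');
      echo json_encode(['next_hash' => $state['hash'], 'data' => $state['data'] ?? null]);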

    Read the article

  • VB.Net plugin using Matlab COM Automation Server...Error: 'Could not load Interop.MLApp'

    - by Ben
    My Problem: I am using Matlab COM Automation Server to call and execute matlab .m files from a VB.Net plugin for a CAD program called Rhino 3D. The code works flawlessly when set up as a simple Windows Application in Visual Studio, but when I insert it (and make the requisite reference) into my .Net plugin and test it in the CAD program I get the following error: "Could not load file or assembly 'Interop.MLApp, Version 1.0.0.0, culture=neutral, PublicKeyToken=null' or one of its dependencies. the system cannot find the file specified." What I've Tried: I am baffled as to why this occurs, but I was able to contact the CAD program's technical support staff and they suggested that it has something to do with their DotNet SDK having trouble with references that are located far outside the CAD program directory. They didn't have any solutions so I tried playing around with copylocal and this made no difference. I tried using other COM libraries and the Open Office automation server works fine, although uses url's instead of requiring a reference. I also tested Excel, which does require a reference, and it returned the error: "retrieving the COM class factory for component with CLSID {...} failed due to the following error: 80040154." This may or may not be related to the issue with the Matlab COM reference, but I thought was worthwhile to share. Perhaps is there another way to reference Interop.MLApp? I would appreciate any suggestions or thoughts on how I might make the Matlab Interop.MLApp reference work. Best regards, Ben

    Read the article

  • Using CakePHP with GoDaddy IIS7 and Microsoft URL Rewriter

    - by ricky
    Hi, I'm trying to move a CakePHP app from a Windows Apache setup to a GoDaddy shared IIS7 setup. It's been easy to migrate except for the Apache mod_rewrite part -- which obviously wouldn't work in IIS7. I basically have no url rewriting capability, which is crucial for Cake to work. GoDaddy now offers MS URL Rewriter, but they don't offer technical support for it. I haven't seen any blog post that discusses how to do this in detail. I'd really like to avoid third-party software, especially since GoDaddy provides MS URL Rewriter, which ought to be more than sufficient. The mod_rewrite directives that will allow Cake to work on GoDaddy look ridiculously easy (pasted below); can someone help me convert it to a web.config I can use with URL Rewriter? The URL Rewriter manual is really long and complicated. I'd rather not have to read the whole thing if I don't have to. Here's the contents of the apache .htaccess file: <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ /index.php?url=$1 [QSA,L] </IfModule> Here's a link that discusses GoDaddy's limited support for URL Rewriter: http://stackoverflow.com/questions/416727/url-rewriting-under-iis-at-godaddy Many thanks! Rich
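    For what it's worth, the usual translation of those directives into the IIS URL Rewrite module's web.config syntax looks roughly like this (untested against GoDaddy's shared hosting, where the rewrite section has to be unlocked; treat it as a starting point rather than a verified answer):

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="CakePHP" stopProcessing="true">
                <match url="^(.*)$" />
                <conditions>
                  <!-- mirrors RewriteCond %{REQUEST_FILENAME} !-d and !-f -->
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                </conditions>
                <!-- mirrors RewriteRule ^(.*)$ /index.php?url=$1 [QSA,L] -->
                <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>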

    Read the article

  • How to prove writing specifications beats code cowboys?

    - by Andrew Grant
    So I have a problem. Or rather my friend has a problem, since I would never write about my company on an internet forum. At my friend's company specification writing is, shall we say, a little underused. There's a deeply ingrained culture of writing code first and asking questions later, whether it's for a library routine or a new tool to inflict on their long suffering designers. This of course leads to situations where functionality is partially correct, incorrect, or just completely missing ("oh, just save before trying anything you may want to undo"). This usually results in a loss of productivity for those poor designers, or beta periods where bug-fixing is largely spent implementing things correctly. My friend's found his suggestions of writing (and testing against) specifications to be generally well received. Most of his colleagues have embraced the wonderful feeling of discovering false-assumptions on paper, instead of at 11pm on a Sunday in the middle of beta. Viva La Revolution! However there are a few who poo-poo anything that stands between their task and a keyboard. They laugh at the thought of actually designing anything, and write code with merry abandon. Mostly these are senior, long employed developers, reluctant to "waste time". The problem is that this second group of heretics invariably produce things (or at least something) quicker than the first. Subsequently this becomes justification along the lines of "It's pointless to write specifications for something as simple as an image resizer! Oh and those bugs where width!=height or the image uses RLE just need a few tweaks". And now the question :) Other than saying "told you so" at the end of a project, what are some good short-term ways to demonstrate how the practice of writing functional or technical specifications leads to better software in the long run? Cheers!

    Read the article

  • UDDI Best Practices

    - by Andrew Cripps
    My organisation is getting into the SOA world (a bit late, but that's what it's like here!) and we're looking into the ESB Toolkit 2.0 (we already have BizTalk Server 2009). We're keen on implementing UDDI (specifically, the UDDI Services v3.0 that ships with BTS 2009), but we're low on actual UDDI experience. We want to manage the ever-burgeoning number of web services we have across all our environments. What are the best practices for implementing UDDI? For example:- Would you implement a single highly-available resilient UDDI server that hosts all services and bindings, including test environment versions? Or would you implement separate UDDI repositories for test and production environments? I'm aware of the Oasis Technical Note v2.0 on WSDL and UDDI, but does anyone actually implement that? I.e. the abstract parts of the WSDL as tModels, the implementation parts of the WSDL as bindings? Would you go to the effort of capturing non-web service endpoints in UDDI, or just use it for WSDL? What are the "gotchas"?

    Read the article

  • Regex query: how can I search PDFs for a phrase where words in that phrase appear on more than one line?

    - by Alison
    I am trying to set up an index page for the weekly magazine I work on. It is to show readers the names of companies mentioned in that week's issue, plus the page numbers they appear on. I want to search all the PDF files for the week, where one PDF = one magazine page (originally made in Adobe InDesign CS3 and Adobe InCopy CS3). I have set up a list of companies I want to search for and, using PowerGREP and delimited regular expressions, I am able to find most page numbers where a company is mentioned. However, where a company name contains two or more words, the search I am running will not pick up instances where the name appears over more than one line. For example, when looking for "CB Richard Ellis" and "Cushman & Wakefield", I got no result when the text appeared like this: DTZ beat BNP PRE, CB [line break here] Richard Ellis and Cushman & [line break here] Wakefield to secure the contract. [line end here] Could someone advise me on how to write a regular expression that will ignore white space between words and ignore line endings OR one that will look for the words including all types of white space (i.e. uneven spaces between words; spaces at the end of lines or line endings; and tabs - I am guessing that this info is embedded somehow in PDF files). Here is a sample of the set of terms I have asked PowerGREP to search for: \bCB Richard Ellis\b \bCB Richard Ellis Hotels\b \bCentaur Services\b \bChapman Herbert\b \bCharities Property Fund\b \bChetwoods Architects\b \bChurch Commissioners\b \bClive Emson\b \bClothworkers’ Company\b \bColliers CRE\b \bCombined English Stores Group\b \bCommercial Estates Group\b \bConnells\b \bCooke & Powell\b \bCordea Savills\b \bCrown Estate\b \bCushman & Wakefield\b \bCWM Retail Property Advisors\b [Note that there is a delimited hard return between each \b at the end of each phrase and the beginning of the next phrase.] By the way, I am a production journalist and not usually involved in finding IT-type solutions and am finding it difficult to get to grips with the technical language on the PowerGREP site. Thanks for assistance Alison
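    To illustrate the point in the question: since \s matches spaces, tabs and line endings alike, replacing each literal space in a phrase with \s+ lets it match even when the name wraps across lines, for example (not from the original list, just showing the change):

      \bCB\s+Richard\s+Ellis\b
      \bCushman\s+&\s+Wakefield\b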

    Read the article

  • Using Essential Use Cases to design a UI-centric Application

    - by Bruno Brant
    Hello all, I'm beginning a new project (oh, how I love the fresh taste of a new project!) and we are just starting to design it. In short: The application is a UI that will enable users to model an execution flow (a Visio-like drag & drop interface). So our greatest concern is usability and features that will help the users model the execution flow quickly and clearly. Our established methodology makes extensive use of Use Cases in order to create a harmonious view of the application between the programmers and users. This is a business concern, really: I'd prefer to use an Agile Method with User Stories rather than Use Cases, but we need to define a clear scope to sell the product to our clients. However, Use Cases have a number of flaws, most of which are related to the fact that they include technical details, like UI, etc, as can be seen here. But, since we can't use User Stories and a fully interactive design, I've decided that we compromise: I will be using Essential Use Cases in order to hide those details. Now I have another problem: it's essential (no pun intended) to have a clear description of UI interaction, so, how should I document it? In other words, how do I specify an application through the use of Essential Use Cases where the UI interaction is vital to it? I can see some alternatives: Abandon the use of Use Cases since they don't correctly represent the problem Do not include interface descriptions in the use cases, but create other documentation (Story Boards) and link it to the Essential Use Cases Include the UI interaction descriptions in the Essential Use Cases, since they are part of the business rules from the perspective of the users and the application itself

    Read the article

  • Advice for Future Programmers?

    - by Nate Zaugg
    I have a buddy that is going to be giving some presentations to high-schoolers. Specifically he asked: What would you be looking for if they approached you about work? Perhaps you are in that age group right now. What do you want to know? Perhaps you are just a few years into the workforce. What do you wish someone had told you but never did? Perhaps you have children, relatives or friends in or soon to be in that age group. What are you worried they don't know about? I'm sure there are other perspectives and questions I'm not even thinking about. I'd like to hear what you have to say about it. Here was my list: Don't be afraid to try! Don't let the perception that something is too difficult stop you from experimenting. Curiosity may have killed the cat, but an un-inquisitive person is mostly useless. Stolen from Einstein: You don't really understand something until you can explain it to your grandmother. It's never enough to be smart, you also have to work well with others. Before you can be really smart, you must learn how to learn. There will always be someone smarter than you are -- Become their buddy! Get to know great minds and learn all you can. Some knowledge can only be expressed this way. Communication, Communication, Communication! Projects rarely fail because of technical reasons and the difference between good programmers and outstanding programmers is how well they communicate. A good work ethic never goes unnoticed. Know when to ask for help and when to figure something out for yourself.

    Read the article

  • Can i use a different parser for Axis 1.4?

    - by NishM
    The current SAX parser takes a lot of time (20 minutes) and heap memory(around 400mb) to deserialize the response coming from the soap server as per the logs. Our response XMLs are of average size 4 mb. A part of the log when it runs the applicaiton out of heap is below DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@112c22) named {}name DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element name DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, name) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.utils.NSStack) NSPop (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Popped element stack to org.apache.axis.message.MessageElement:property DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::endElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::startElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(pushHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@1db74af) named {}value DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element value DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.utils.NSStack) NSPop (32) I cannot use Axis2 because of technical reasons. I have tried using HTTP Commons client instead of HTTP client but the response time remains the same. How can i link a different parser(example xerces 2.10.0 or xstream 1.3.1?) to Axis 1.4 framework in this context so that memory management and response time is favorable?.

    Read the article

  • A PHP script to stream internet radio?

    - by Honus Wagner
    I've been searching and searching and I haven't yet come up with a solution for hosting my own streaming audio player. I'm looking for a way to host an internet radio player that connects to whatever streams I enter and plays them. I'm not looking to play my MP3s or anything like that. I'm looking to play content from 181.fm or 1Club.fm, for example. I'd even settle for ShoutCast-only streams. I've been to www.wavestreaming.com but it didn't work for me. I'm guessing it's because in the very first box, where you enter your website URL, it leads in for you: http//www. and then you fill in the rest. My site is https:// and does not contain a www. in the URL. I'm guessing that has something to do with it. Any links, suggestions for search topics, or even a brief technical overview of what I should be looking into would be greatly appreciated. Thanks for your time.

    Read the article

  • How to Increase the time till a read timeout error occurs?

    - by Alex
    Hi all. I've written a PHP script that takes a long time to execute [image processing for thousands of pictures]. It's a matter of hours - maybe 5. After 15 minutes of processing, I get the error: ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: The URL which I clicked Read Timeout The system returned: [No Error] A Timeout occurred while waiting to read data from the network. The network or server may be down or congested. Please retry your request. Your cache administrator is webmaster. What I need is to enable that script to run for much longer. Now, here is all the technical info: I'm writing in PHP and using the Zend Framework. I'm using Firefox. The long-running script is kicked off by clicking a link. Obviously, since the script is not finished, I still see the web page the link was on and the web browser shows "waiting for ...". After 15 minutes the error occurs. I tried to make changes to Firefox through about:config but without any success. I don't know, but the changes might be needed somewhere else. So, any ideas? Thanks ahead.
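    A hedged sketch of the usual PHP-side mitigations (the file path and the process_image() stub are placeholders, not from the question). Note the error page quoted above has the wording of an intermediate proxy (Squid) read timeout, so raising PHP's own limits may not be enough on its own; running the job from the command line or in the background avoids the browser and the proxy entirely.

      <?php
      set_time_limit(0);                    // no PHP execution-time limit
      ini_set('memory_limit', '512M');      // image work is memory hungry
      ignore_user_abort(true);              // keep going if the browser/proxy gives up

      function process_image(string $file): void {
          // placeholder for the real per-picture work
      }

      foreach (glob('/path/to/pictures/*.jpg') as $i => $file) {
          process_image($file);
          if ($i % 25 === 0) {
              echo "processed $i\n";        // trickle some output so the connection
              flush();                      // shows activity instead of going silent
          }
      }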

    Read the article
