Search Results

Search found 66801 results on 2673 pages for 'near real time'.


  • .NET developer needs FoxPro advice

    - by katit
    We have a prospect running a FoxPro 2.6 system (whatever that means). Our product integrates with other systems by means of triggers (usually). We would place a couple of triggers on system X and then just pull the collected data for our own use. This way there is no need to customize the customer's product, and it works great (almost real time: we poll for changes every 30 seconds). Questions: Can I put triggers on FoxPro 2.6? Can I access FoxPro from .NET? Any catches/caveats?
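
    Note that table-level triggers in the Fox world arrived with the database container in Visual FoxPro 3.0, so FoxPro 2.6 free tables have nothing to hang a trigger on; a 2.6 integration usually ends up polling the DBF files themselves. From .NET, the Visual FoxPro OLE DB provider (VFPOLEDB) is the usual route to the data. As a minimal sketch of the polling loop in Python (the legacy "Microsoft Visual FoxPro Driver" ODBC driver is real but 32-bit only; the directory and the CHANGES table with its CHANGE_ID column are hypothetical):

        # Poll a FoxPro/DBF directory for rows added since the last poll.
        # Assumes the legacy "Microsoft Visual FoxPro Driver" (32-bit) and a
        # hypothetical CHANGES table with an incrementing CHANGE_ID column.
        import time
        import pyodbc

        CONN_STR = (
            r"DRIVER={Microsoft Visual FoxPro Driver};"
            r"SourceType=DBF;"
            r"SourceDB=C:\data\foxpro;"
        )

        def poll_changes(last_id=0, interval=30):
            while True:
                with pyodbc.connect(CONN_STR) as conn:
                    cur = conn.cursor()
                    cur.execute(
                        "SELECT change_id, payload FROM changes WHERE change_id > ?",
                        last_id,
                    )
                    for change_id, payload in cur.fetchall():
                        print("change", change_id, payload)  # hand off to the product here
                        last_id = max(last_id, change_id)
                time.sleep(interval)  # the 30-second near-real-time window from the post

        if __name__ == "__main__":
            poll_changes()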

    Read the article

  • Performance tracking/monitoring in games

    - by vitaliy kotik
    Let's say I have an online game with a downloadable client / browser plugin. I want to track the performance of my software and automatically send a summary to the server. Let it be fps, latency, load time, physics step calculation time, whatever... I also want tools to perform data analysis: per-session stats, per-hardware stats, averages, totals, diagrams, etc., so that I can see where the real-world hotspots / bottlenecks are. Is there any common out-of-the-box / SaaS solution?
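
    Whatever tool ends up on the server side, the client half of this usually reduces to the same shape: sample locally, aggregate per session, and ship one compact summary. A minimal sketch in Python (the endpoint URL and payload fields are placeholders, not any particular product's API):

        # Collapse raw per-frame/per-request samples into a per-session summary
        # and POST it as JSON. Endpoint and payload shape are made up for the sketch.
        import json
        import statistics
        import urllib.request

        def summarize(samples):
            ordered = sorted(samples)
            return {
                "avg": statistics.fmean(samples),
                "min": ordered[0],
                "max": ordered[-1],
                "p95": ordered[int(0.95 * (len(ordered) - 1))],
            }

        def send_session_summary(session_id, fps_samples, latency_ms_samples):
            payload = {
                "session": session_id,
                "fps": summarize(fps_samples),
                "latency_ms": summarize(latency_ms_samples),
            }
            req = urllib.request.Request(
                "https://example.com/perf",  # placeholder endpoint
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)

        send_session_summary("session-42", [58.0, 61.2, 30.4], [12.0, 18.5, 240.0])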

    Read the article

  • Some moving Flash content is displayed incorrectly in Firefox 5

    - by hansioux
    I am using 11.04 Natty and Firefox 5. My video card is an ATI Radeon HD 4670, and I am using the newest proprietary driver downloaded from the ATI website. Some Flash apps, such as embedded YouTube videos and a couple of other apps, are displayed incorrectly (I think those using the old embed element rather than the new iframe embed). What happens is that as a portion of the Flash content moves around, parts of it keep blanking out. For the embedded YouTube videos, it greys out; some other apps turn transparent, showing the webpage's background. The same Flash apps run perfectly fine in Google Chromium on the same computer. I have another computer with 10.10 Maverick, Firefox 5 and an nVidia card and driver, and they also run fine there. So, does anyone have an idea of what is causing the problem?

    Read the article

  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though it was the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you've answered "Yes" to any of the above questions, then you've suffered from 'drift'.

    Database drift is where the state of a database (its schema, particularly) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it's time to release a new version of the database. A deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update.

    A big issue with drift is that it can be hard to spot, and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you'll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process?

    Red Gate's Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We've started to get a really good idea of how big a problem it can be and what database professionals need to know and do in order to deal with it. It's fair to say we're pretty excited at the prospect of creating a tool that will really help, and we've got some great feedback on our initial ideas.

    We're now well underway with the development of our new drift-spotting product – SQL Lighthouse – and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you'd like help solving and you are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list.
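
    Until a dedicated tool is in place, a crude drift check can be scripted by hand: capture a fingerprint of the schema at release time and compare it on a schedule. A rough sketch in Python against SQL Server's INFORMATION_SCHEMA (the DSN and baseline file are placeholders; this is a homemade check, not SQL Lighthouse):

        # Hash a canonical dump of column metadata; a changed hash means the
        # schema has drifted from the baseline captured at release time.
        import hashlib
        import pyodbc

        def schema_fingerprint(conn_str):
            with pyodbc.connect(conn_str) as conn:
                cur = conn.cursor()
                cur.execute(
                    "SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE "
                    "FROM INFORMATION_SCHEMA.COLUMNS "
                    "ORDER BY TABLE_NAME, COLUMN_NAME"
                )
                rows = ["|".join(str(v) for v in row) for row in cur.fetchall()]
            return hashlib.sha256("\n".join(rows).encode("utf-8")).hexdigest()

        current = schema_fingerprint("DSN=Production")       # placeholder DSN
        baseline = open("baseline.sha256").read().strip()    # captured at release
        if current != baseline:
            print("Schema drift detected: diff the environments before deploying.")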

    Read the article

  • Detect frameworks and/or CMS utilized on websites in Firefox

    - by jkneip
    I'm redesigning the website for my academic library and am examining other sites to identify the technologies they use. Things like:

    - Web frameworks
    - Javascript frameworks
    - Server-side technology
    - Content management system

    Now, I've had some real success in Firefox using plugins like Wappalyzer, Firebug, and the DOM Inspector, but some sites just don't display any of the info. These tools seem to come up empty especially when an enterprise-level CMS is being used. Does anyone know of any other tools to detect this kind of data? Also, with Firebug and the DOM Inspector there is a lot of info displayed, and I wondered if there was a way to derive the presence of server-side technologies, CMSs, etc. from certain elements of a web page. Also, if this question is more relevant to another Stack Exchange site, please let me know and I'll post it there instead. Much thanks, Jason
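
    For reference, part of what plugins like Wappalyzer automate is simply inspecting response headers and the meta generator tag, and enterprise CMSs are frequently configured to strip exactly these hints, which may be why the tools come up empty. A small illustrative sketch in Python (placeholder URL):

        # Print the common server-side giveaways: Server / X-Powered-By /
        # X-Generator headers and the <meta name="generator"> tag, if present.
        import re
        import urllib.request

        def sniff(url):
            with urllib.request.urlopen(url) as resp:
                html = resp.read(200_000).decode("utf-8", errors="replace")
                for header in ("Server", "X-Powered-By", "X-Generator"):
                    if resp.headers.get(header):
                        print(f"{header}: {resp.headers[header]}")
            match = re.search(
                r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)',
                html, re.IGNORECASE,
            )
            if match:
                print("meta generator:", match.group(1))

        sniff("https://example.edu")  # placeholder URL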

    Read the article

  • Changing the sequencing strategy for File/Ftp Adapter

    - by [email protected]
    The File/Ftp Adapter allows the user to configure the outbound write to use a sequence number. For example, if I choose address-data_%SEQ%.txt as the FileNamingConvention, then all my files would be generated as address-data_1.txt, address-data_2.txt, and so on. But where does this sequence number come from? The answer lies in the "control directory" for the particular adapter project (or scenario). In general, for every project that uses the File or Ftp Adapter, a unique directory is created for bookkeeping purposes. And since this control directory is required to be unique, the adapter uses a digest to make sure that no two control directories are the same. For example, for my FlatStructure sample, the control information for my project would go under FMW_HOME/user_projects/domains/soainfra/fileftp/controlFiles/[DIGEST]/outbound, where the value of DIGEST would differ from one project to another. If you look under this directory, you will see a file control_ob.properties, and this is where the sequence number is maintained. Please note that the sequence number is maintained in binary form, and hence you might need a hex editor to view its content. You will also see another zero-byte file, SEQ_nnn, but ignore that for now; we'll get to it some other time. For now, please remember that this extra file is maintained as a backup.

    One of the challenges faced by the adapter runtime is to guard all writes to the control files so that no two threads inadvertently try to update them at the same time. It does so with the help of a "mutex". For now, please remember that the mutex comes in different flavors: in-memory, DB-based, Coherence-based and user-defined. Again, we will talk about these mutexes some other time. Please note that there might be scenarios, particularly under heavy load, where the mutex might become a bottleneck. The adapter, however, allows you to change the configuration so that the adapter sequence value comes from a database sequence or a stored procedure; in such situations the mutex is actually bypassed, thereby resulting in better throughput. In later releases, the adapter's behavior will default to using a DB sequence. The simplest way to achieve this is by switching the JNDI for your outbound JCA file to use "eis/HAFileAdapter", as in the sketch at the end of this post.

    But what does this do? Internally, the adapter runtime creates a sequence on the Oracle database. For example, if you do a "select * from user_sequences" in your soainfra schema, you will see a new sequence being created with a name like SEQ_<GUID>__, where the GUID will differ from one project to another. However, if you want to use your own sequence, then you will need to add a new property to your JCA file called SequenceName, as shown in the sketch below. Please note that you will need to create this sequence in your soainfra schema beforehand.

    But what if we use DB2 or MSSQL Server as the dehydration store? DB2 supports sequences natively but MSSQL Server does not. So the adapter runtime uses a natively generated sequence for DB2, but for MSSQL Server the adapter relies on a stored procedure that ships with the product. If you wish to achieve the same result for SOA Suite running DB2 as the dehydration store, simply change your connection factory JNDI name in the JCA file to eis/HAFileAdapterDB2, and for MSSQL, please use eis/HAFileAdapterMSSQL. And if you wish to use a stored procedure other than the one that ships with the product, you will need to rely on binding properties to override the adapter behavior; in particular, you will need to instruct the adapter that you wish to use a stored procedure. Please note that if you're using the File/Ftp Adapter in Append mode, the adapter runtime degrades the mutex to use pessimistic locks, as we don't want writers from different nodes to append to the same file at the same time.
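
    In sketch form, the SequenceName override sits alongside the other binding properties in the outbound JCA file. This is illustrative only: the element names follow the usual Oracle JCA adapter file layout (which can vary by version), and MY_FILE_SEQ is a stand-in for a sequence you have created beforehand in the soainfra schema (e.g. CREATE SEQUENCE MY_FILE_SEQ):

        <!-- Illustrative fragment; not taken from the original post's screenshots -->
        <adapter-config name="WriteAddressData" adapter="File Adapter"
                        xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
          <connection-factory location="eis/HAFileAdapter"/>
          <endpoint-interaction portType="Write_ptt" operation="Write">
            <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
              <property name="FileNamingConvention" value="address-data_%SEQ%.txt"/>
              <property name="SequenceName" value="MY_FILE_SEQ"/>
            </interaction-spec>
          </endpoint-interaction>
        </adapter-config>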

    Read the article

  • Adventures in Scrum: Lesson 1 – The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner who wanted the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire "Team" (7±2) involved in the sizing of the work during the "Sprint Planning Meeting". The standard flippant answer of all Scrum professionals, "Well, that's not Scrum", does not get you any brownie points in these situations. The response could be "Well, we are not doing Scrum then", which in turn leads to "We are doing Scrum… but we have split the Scrum team into units of 2/3 so that they can concentrate on a specific area of work". While this may work, it is not Scrum and should not be called so… it is just a form of Agile. Don't get me wrong at this stage, there is nothing wrong with Agile, just don't call it Scrum.

    The reason the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially shippable software at the end of the first Sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even high-performing teams. Remember, it is the product owner's money!

    We should NOT break up Scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies.

    "The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint." - Scrum Guide, Scrum.org

    The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain.

    "A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team." - Scrum Guide, Scrum.org

    This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high-performing teams, and this is what Scrum as a framework has been optimised to produce.

    I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum amid the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but should have selected a customer that is open to the ideas and complications of Scrum. So, what should we NOT have done on our first Scrum project:

    - Should not have had 3 interns as the only on-site resource – this led to bad practices, as the experienced guys were not there helping and correcting as they usually would.
    - Should not have had the only experienced guys offsite – with both of the experienced technical guys in completely different time zones, it was difficult to get time for questions. Helping the guys on site was just plain impossible.
    - Should not have used a part-time ScrumMaster – although the ScrumMaster attended all of the Ceremonies, because they are only in two full days a week it is difficult for the team to raise impediments as they go.
    - Should not have used a proxy product owner – this was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The "single wringable neck" needs to hold both the Money and the Vision, as well as attending the required meetings.

    I will be bringing all of these things up at the Sprint Retrospective, and we will learn from our mistakes and move on. Do, Inspect, then Adapt…

    Technorati Tags: Scrum, Sprint Planning, Sprint Retrospective, Scrum.org, Scrum Guide, Scrum Ceremonies, ScrumMaster, Product Owner

    Need Help? Professional Scrum Developer Training. SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • Does TDD lead to good design?

    - by Eugen Martynov
    I'm in transition from a "writing unit tests" state to TDD. I watched Johannes Brodwall create quite an acceptable design while avoiding any upfront architecture phase. I'll ask him soon whether it was real improvisation or whether he had some thoughts upfront. I also clearly understand that everyone has experience that keeps them from writing explicitly bad design patterns. But after participating in a code retreat, I find it hard to believe that writing tests first can save us from mistakes. On the other hand, I do believe that writing tests after the code leads to mistakes much faster. So tonight's question is for people who have been using TDD for a long time: please share your experience with the designs that emerge without upfront thinking. Do you really practice it and mostly get a suitable design? Or is this just my limited understanding of TDD, and probably of agile?
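
    For readers new to the practice, the dynamic under discussion fits in a few lines: the test is written first, so the shape of the API is decided by how the caller wants to use it rather than by an upfront design document. A deliberately tiny illustration in Python (names invented for the sketch):

        # The test existed before the function below it; the API's shape
        # (a free function over (qty, unit_price) pairs) fell out of the test.
        def test_total_price_sums_line_items():
            assert total_price([]) == 0
            assert total_price([(2, 300), (1, 150)]) == 750  # prices in cents

        def total_price(line_items):
            # Smallest implementation that makes the test pass; refactor later.
            return sum(qty * unit_price for qty, unit_price in line_items)

        test_total_price_sums_line_items()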

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge data is very common when it comes to Data Warehousing. It is necessary to create Cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly. Suddenly, one day, it starts returning the data very slowly. What are the three things you would do to diagnose this? After diagnosing, what would you do to resolve the performance issue? Participate in my question over here.

    I requested BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about a complicated subject in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they are running. Some possible changes are listed below:

    1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues till the data grows. How do you find it out? (Ans: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance-tuning the MDX queries.)

    2) Internal factors: Are some slow MDX queries, or multiple MDX queries, running at the same time that were not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans: Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once.)

    3) External factors: Is some other application competing for the same resources?

    HINT: Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx)

    Well, these are great tips. Now win big prizes by participating in my question over here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What are the tangible advantages of proper unit tests over functional tests called unit tests?

    - by Jackie
    A project I am working on has a bunch of legacy tests that were not properly mocked out. Because of this, the only mocking dependency it has is EasyMock, which doesn't support statics, constructors with arguments, etc. The tests instead rely on database connections and such to "run" the tests. Adding PowerMock to handle these cases is being shot down as cost-prohibitive due to the need to upgrade the existing project to support it (another discussion). My questions are: what are the REAL-world, tangible benefits of proper unit testing that I can use to push back? Are there any? Am I just being a stickler by saying that bad unit tests (even if they work) are bad? Is code coverage just as effective?
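
    The project in question is Java (EasyMock/PowerMock), but the tangible-benefit argument is language-agnostic and small enough to sketch: with the collaborator injected and mocked, the test pins down the unit's logic alone, runs in milliseconds, and fails only when the logic breaks rather than when the database or network does. An illustrative Python equivalent (names invented for the sketch):

        import unittest
        from unittest.mock import Mock

        def overdue_invoices(repo, today):
            # Unit under test: pure logic over an injected repository.
            return [inv for inv in repo.fetch_open_invoices() if inv["due"] < today]

        class OverdueInvoicesTest(unittest.TestCase):
            def test_only_past_due_invoices_are_returned(self):
                repo = Mock()
                repo.fetch_open_invoices.return_value = [
                    {"id": 1, "due": 10},
                    {"id": 2, "due": 99},
                ]
                result = overdue_invoices(repo, today=50)
                self.assertEqual([1], [inv["id"] for inv in result])
                repo.fetch_open_invoices.assert_called_once()  # no DB, no network

        if __name__ == "__main__":
            unittest.main()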

    Read the article

  • MCM Lab exam this week

    - by Rob Farley
    In two days I’ll’ve finished the MCM Lab exam, 88-971. If you do an internet search for 88-971, it’ll tell you the answer is –883. Obviously. It’ll also give you a link to the actual exam page, which is useful too, once you’ve finished being distracted by the calculator instead of going to the thing you’re actually looking for. (Do people actually search the internet for the results of mathematical questions? Really?)

    The list of Skills Measured for this exam is quite short, but can essentially be broken down into one word: “Anything”. The Preparation Materials section is even better. Classroom Training – none available. Microsoft E-Learning – none available. Microsoft Press Books – none available. Practice Tests – none available. But there are links to Readiness Videos and a page which has no resources listed, but tells you a list of people who have already qualified. Three in Australia have MCM SQL Server 2008 so far. The list doesn’t include some of the latest batch, such as Jason Strate or Tom LaRock.

    I’ve used SQL Server for almost 15 years. During that time I’ve been awarded SQL Server MVP seven times, but the MVP award doesn’t actually mean all that much when considering this particular certification. I know lots of MVPs who have tried this particular exam and failed – including Jason and Tom. Right now, I have no idea whether I’ll pass or not. People tell me I’ll pass no problem, but I honestly have no idea. There’s something about that “Anything” aspect that worries me. I keep looking at the list of things in the Readiness Videos, and think to myself “I’m comfortable with Resource Governor (or whatever) – that should be fine.” Except that then I feel like I maybe don’t know all the different things that can go wrong with Resource Governor (or whatever), and I wonder what kind of situations I’ll be faced with. And then I find myself looking through the stuff that’s explained in the videos, and wondering what kinds of things I should know that I don’t, and then I get amazingly bored and frustrated (after all, I tell people that these exams aren’t supposed to be studied for – you’ve been studying for the last 15 years, right?), and I figure “What’s the worst that can happen? A fail?”

    I’m told that the exam provides a list of scenarios (maybe 14 of them?) and you have 5.5 hours to complete them. When I say “complete”, I mean complete – you don’t get to leave them unfinished, that’ll get you ‘nil points’ for that scenario. Apparently no-one gets to complete all of them.

    Now, I’m a consultant. I get called on to fix the problems that people have on their SQL boxes. Sometimes this involves fixing corruption. Sometimes it’s figuring out some performance problem. Sometimes it’s as straightforward as getting past a full transaction log; sometimes it’s as tricky as recovering a database that has lost its metadata, without backups. Most situations aren’t a problem, but I also have the confidence of being able to do internet searches to verify my maths (in case I forget it’s –883). In the exam, I’ll have maybe twenty minutes per scenario (but if I need longer, I’ll have to take longer – no point in stopping halfway if it takes more than twenty minutes, unless I don’t see an end coming up), so I’ll have time constraints too. And of course, I won’t have any of my usual tools. I can’t take scripts in, I can’t take staff members. Hopefully I can use the coffee machine that will be in the room.
I figure it’s going to feel like one of those days when I’ve gone into a client site, and found that the problems are way worse than I expected, and that the site is down, with people standing over me needing me to get things right first time... ...so it should be fine, I’ve done that before. :) If I do fail, it won’t make me any less of a consultant. It won’t make me any less able to help all of my clients (including you if you get in touch – hehe), it’ll just mean that the particular problem might’ve taken me more than the twenty minutes that the exam gave me. @rob_farley PS: Apparently the done thing is to NOT advertise that you’re sitting the exam at a particular time, only that you’re expecting to take it at some point in the future. I think it’s akin to the idea of not telling people you’re pregnant for the first few months – it’s just in case the worst happens. Personally, I’m happy to tell you all that I’m going to take this exam the day after tomorrow (which is the 19th in the US, the 20th here). If I end up failing, you can all commiserate and tell me that I’m not actually as unqualified as I feel.

    Read the article

  • Clone an Azure VM using PowerShell

    - by jamiet
    In a few months' time I will, in association with Technitrain, be running a training course entitled Introduction to SQL Server Data Tools. I am currently working on putting together some hands-on lab material for the course delegates and have decided that, in order to save time in asking people to install software during the course, I am simply going to prepare a virtual machine (VM) containing all the software and lab material for each delegate to use. Given that I am an MSDN subscriber, it makes sense to use Windows Azure to host those VMs, given that it will be close to, if not completely, free to do so. What I don't want to do, however, is separately build a VM for each delegate; I would much rather build one VM and clone it for each delegate.

    I've spent a bit of time figuring out how to do this using PowerShell, and in this blog post I am sharing a script that will:

    - Prompt for some information (Azure credentials, Azure subscription name, VM name, username & password, etc…)
    - Create a VM on Azure using that information
    - Prompt you to sysprep the VM and image it (this part can't be done with PowerShell so has to be done manually; a link to instructions is provided in the script output)
    - Create three new VMs based on the image
    - Remove those three VMs

    Simply download the script and execute it within PowerShell; assuming you have an Azure account, it should take about 20 minutes to execute (spinning up VMs and shutting them down isn't instantaneous). If you experience any issues please do let me know. There are additional notes below. Hope this is useful! @Jamiet

    Notes:

    - Obviously there isn't a lot of point in creating some new VMs and then instantly deleting them. However, this demo script does provide everything you need should you want to do any of these operations in isolation.
    - The names of the three VMs that get created will be suffixed with 001, 002, 003, but you can edit the script to call them whatever you like.
    - The script doesn't totally clean up after itself. If you specify a service name & storage account name that don't already exist, then it will create them; however, it won't remove them when everything is complete. The created image file will also not be deleted. Removing these items can be done by visiting http://manage.windowsazure.com.
    - When creating the image, ensure you use the correct name (the script output tells you what name to use).
    - When the third and final VM gets removed you are asked to confirm via a dialog; select 'Yes'.

    Read the article

  • Getting developers and support to work together

    - by Matt Watson
    Agile development has ushered in the norm of rapid iteration and change within products. One of the biggest challenges for agile development is educating the rest of the company. At my last company our biggest challenge was trying to continually train 100 employees in our customer support and training departments. It's easy to write release notes and email them to everyone, but for complex software products, release notes usually aren't enough detail. You really have to educate your employees on the WHO, WHAT, WHERE, WHY and WHEN of every item. If you don't do this, you end up with customer service people who know less about your product than your users do. Ever call a company and feel like you know more about their product than their customer service people do? Yeah. I'm talking about that problem.

    - WHO does the change affect?
    - WHAT was the actual change?
    - WHERE do I find the change in the product?
    - WHY was the change made? (It's hard to support something if you don't know why it was done.)
    - WHEN will the change be released?

    One thing I want to stress is the importance of the WHY something was done. For customer support people to be really good at their job, they need to understand the product and how people use it. Knowing how to enable a feature is one thing; knowing why someone would want to enable it is a whole different thing, and the difference in good customer service. Another challenge is getting support people to better test and document potential bugs before escalating them to development. Trying to fix bugs without examples is always fun... NOT. They might as well say "The sky is falling, please fix it!"

    We need to over-train the support staff about product changes and continually stress how they document and test potential product bugs. You also have to train the sales staff and the marketing team. Then there is updating sales materials, your website, product documentation and other items that are always out of date. Every product release causes this vicious circle of trying to educate the rest of the company about the changes.

    Do we need to record a simple video explaining the changes and email it to everyone? Maybe we should use a simple online training app to help with this problem. Ultimately the struggle is taking the time to do the training, but it is time well spent. It may save you a lot of time answering questions and fixing bugs later. How do we efficiently transfer key product knowledge from developers and product owners to the rest of the company? How have you solved these issues at your company?

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs, I tested the map to check that they worked as expected. All but one of the functoids worked as expected; the final functoid appeared not to be firing at all.

    I had already tested the code in a simple test harness application, so I was confident in it, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong. After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element; it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element.

    The functoid I was having problems with was one where I had used the XPath functoid type. This had seemed to be a good fit, as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area. From the list below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected.

    The categories, with their descriptions and toolbox groups, are:

    - Assert: Internal Use Only [Advanced]
    - Conversion: Converts characters to and from numerics and converts numbers from one base to another. [Conversion]
    - Count: Internal Use Only [Advanced]
    - Cumulative: Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. [Cumulative]
    - DatabaseExtract: Internal Use Only [Database]
    - DatabaseLookup: Internal Use Only [Database]
    - DateTime: Adds date, time, date and time, or adds days to a specified date, in output data. [Date/Time]
    - ExistenceLooping: Internal Use Only [Advanced]
    - Index: Internal Use Only [Advanced]
    - Iteration: Internal Use Only [Advanced]
    - Keymatch: Internal Use Only [Advanced]
    - Logical: Controls conditional behavior of other functoids to determine whether particular output data is created. [Logical]
    - Looping: Internal Use Only [Advanced]
    - MassCopy: Internal Use Only [Advanced]
    - Math: Performs specific numeric calculations such as addition, multiplication, and division. [Mathematical]
    - NilValue: Internal Use Only [Advanced]
    - Scientific: Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions. [Scientific]
    - Scripter: Internal Use Only [Advanced]
    - String: Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim. [String]
    - TableExtractor: Internal Use Only [Advanced]
    - TableLooping: Internal Use Only [Advanced]
    - Unknown: Internal Use Only [Advanced]
    - ValueMapping: Internal Use Only [Advanced]
    - XPath: Internal Use Only [Advanced]

    Links:
    - http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    - http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx

    Read the article

  • Is the C programming language still used?

    - by Pankaj Upadhyay
    I am a C# programmer, and most of my development is for websites, along with a few Windows applications. As far as C goes, I haven't used it in a long time, as there was no need to. It came as a surprise to me when one of my friends said that she needs to learn C for testing jobs, while I was helping her learn C#. I figured that someone would learn C for testing only if development is being done in C. To my knowledge, all the development related to COM and hardware design is done in C++. Therefore, learning C doesn't make sense if you need to use C++. I also don't believe in historic significance alone, so why waste time and money learning C? Is C still used in any kind of new software development or anything else?

    Read the article

  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
    The countdown to PASS has counted down. The day after tomorrow I will board a plane, like many others, on my way for the 4th year in a row to the SQL PASS Summit. The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS, so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way, and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge, also for the 4th year in a row. This year I have again been tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person, as I have read many of his articles on SSC and have attended his session at PASS previously. The speech is all ready, but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below).

    Let's face it: Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun. One incident that happened this year that falls under the High Jinks category was when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall that he had just configured. I was able to establish a connection on port 1433, and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session, much like you might do with an SMTP server. With that thought, I proceeded to demonstrate this could be possible by convincing my senior DBA, Shawn McGehee, that I was able to do so. At first he did not believe me. It shook his world view. It was inconceivable. What I had done, behind the scenes of course, was to copy and rename SQLCMD.exe to Telnet.exe and use it to connect and run a simple "Select * from sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds though.

    On the Hi Jacks side of the house, which really amounts to SQL hacks, I finally stumbled upon a solution to something I had struggled with for years: how to connect from SSMS to an untrusted domain, such as one in a DMZ, with a Windows account. It does away with the requirement to use SQL Authentication. While "Runas" is a great command for running an application with a higher-privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "Runas"; it never connected and caused a login failure every time for the remote Windows domain account. Then I ran across an option for "Runas": "/netonly". This option postpones the login until a connection is made, and only then passes the remote login you supply when you first launch SSMS with the "Runas" command.

    So a typical shortcut would look like: C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe". You will want to make sure the passwords are synced between the two domains (your local domain and the remote domain), otherwise you may have account lockout issues, but I have found in weeks of testing that this is a stable solution.

    Now it is time to get ready to head for Seattle. Please, if you see me (@SQLBeat) or my wife (@Karlakay22), run up and high-five me (wait... High Jinks. Hi Jacks. High Fives. Need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage.

    And now the links to others who have all of the details. First, for MVP Deep Dives 2, in which, like John, I was lucky enough to participate this year: http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded: http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx

    Cheers! Rodney

    Read the article

  • Adding my face to my website in Google's search results

    - by Roman Matveev
    I'm trying to add rich snippets to the template of my future website. The data format is Review, and I used microdata formatting to add all the necessary information to the web page. The Structured Data Testing Tool delivered the rating, author information and review date. However, my face image is missing and the sections related to authorship are empty. I did everything recommended to link my Google+ profile to the website. Did I do something wrong? Or will I never see my face in the testing tools, but it will still appear in the real SERPs?

    Read the article

  • Any good tutorials on all this web programming stuff for a GUI person? [closed]

    - by supercheetah
    For some reason, I am having a hard time understanding all this web programming stuff, from AJAX to JSON, etc. I've got plenty of experience programming GUIs. I'm currently working on a project in Python, and I thought that maybe I could just use PyJS (since it's GWT for Python, it uses an API that's very familiar to experienced GUI programmers like myself) to compile it with a Javascript interface on top, but alas, the compiler gave me a spectacular failure. It's obviously not meant to handle much of any Python beyond itself and some of the core Python library. It would have been nice if it could, but I will admit it would have been the lazy way to do it. I tried to learn Django, but for some reason I'm just having a hard time understanding the tutorial on their website and what it's all doing. Maybe it's not the best framework to learn? Anyway, does anyone have a good primer/tutorial explaining all this stuff, especially for Python, and especially for someone coming from a GUI background?

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the Foreign Keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated, "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems, and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.

    Read the article

  • It is not quiet....

    - by Anthony Shorten
    You may have noticed that the blog entries have been not so frequent lately. I have been extremely busy putting the final touches on the release of the next generation of our products. It is an exciting time for me to see the release of new functionality from start to finish for the first time since becoming a Product Manager (good to see one's designs actually implemented). Once the next generation of the products has been released, there will be a flood of entries outlining all the new and exciting features. Watch the skies....

    Read the article

  • Wifi disabled on dell Inspiron Mini 10 (1018)

    - by Manu
    My netbook cannot access the wifi network, and I cannot enable it. This is the second time this has happened. The first time, it came back after a lot of restarts and BIOS factory resets. Ubuntu sometimes simply won't connect to any wifi networks, and other times it will say that the wireless is blocked by a hardware switch. I have tried pressing F2 and doing a BIOS reset, with no luck. sudo rfkill list all gives:

        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes
        1: dell-wifi: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes

    Read the article

  • How to include a serious personal project in a resume?

    - by mob1lejunkie
    My brother has come up with an interesting business idea that could be commercialised. For over a month I have been creating the foundation for the SaaS. I have been treating this as a commercial project, so I am designing using patterns and best practices. One of the reasons I want to include this in my resume is that my full-time job doesn't involve the current trendy ASP.Net technologies (e.g. LINQ/Entity relationships, jQuery, ASP.Net MVC 3, Silverlight, etc.), so the resume lacks impact. In my full-time job I work on a 7-year-old, well-designed product, and since our data and web layers work well it would be stupid to re-engineer them only because recruiters think LINQ, ASP.Net MVC and jQuery are cool. How can I include my personal project in my resume so that it doesn't sound like an experiment or a quick-and-dirty pet project? Many thanks.

    Read the article

  • Webinar: Meeting Customer Expectations in the New Age of Retail

    - by Sanjeev Sharma
    Webcast Date: Thursday, November 8, 2012
    Time: 10am PT / 1pm ET

    The retail market has expanded into the online, mobile, and social worlds, but the key to success hasn't changed since the days of traditional, brick-and-mortar business: it's still about service. To succeed in today's omni-channel customer engagement, a retailer must be able to deliver quality service that meets customer expectations. For many retailers, Oracle Web commerce applications help them achieve that success, allowing them to market, interact, and transact across multiple channels in a predictable, consistent, and personalized manner. Join us for this Webcast, and learn what Oracle applications can do for your business. In this session, we will discuss:

    - The significance and dimensions of modern omni-channel customer experience
    - The Oracle Commerce platform
    - Real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems

    Register today.

    Speakers:
    - Sanjeev Sharma, Principal Product Director, Oracle Exalogic, Oracle
    - Kelly Goetsch, Senior Principal Product Manager, Oracle Commerce, Oracle
    - Dan Conway, Senior Product Manager, Oracle Retail, Oracle

    Read the article

  • Obtaining in-game Warcraft III data to program a standalone AI

    - by Slav
    I am implementing a general-purpose behavioral algorithm and would like to test it in my beloved Warcraft III, to watch how it fights against real players. The problem is how to obtain information about the in-game state (units, structures, environment, etc.). The algorithm needs access to the hard drive and possibly distributed computing, which is why JASS (the WC3 editor language) doesn't solve the issue. Direct3D hooking is one approach, but it hasn't been done for WC3 yet, and a significant drawback would be the inability to watch online how the AI performs, since it uses the viewport to issue commands. What is the fastest and easiest way to get in-game data out to a different process? Thank you.

    Read the article

  • Problem installing Eclipse on Ubuntu 12.04

    - by Srijan
    I have tried more than a dozen times to install the Eclipse version available in the Ubuntu Software Center, still to no avail. Every time I try to install it, it shows an error, i.e. unable to install. I have tried installing it from the terminal as well, with the same results. After installation it refuses to run. What should I do, or are there other packages I can use instead of Eclipse? I am mainly trying to run Java programs through Eclipse. Thanks for your time. I am using Ubuntu 12.04 on x86-64 architecture.

    Read the article
