Search Results

Search found 3274 results on 131 pages for 'ultimate guy'.

Page 115/131 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-Suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with their return on the money they invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with a base definition I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on equity capital respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus. Now that the challenge has been made and the terms have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.

    Most UPK ROIs I have seen only include the cost savings in developing the training material. Some will also include savings based on reduced Help Desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing or upgrading an enterprise application. Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good base value to measure the current efficiency. Using usage tracking you can look for various patterns. For example, you may find that users who access UPK assistance are processing a procedure, such as entering an order, 5 minutes faster than those who don't. You do some research and discover each minute saved in processing a transaction saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt, or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find ways to reduce cost. These are the kind of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.
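
    The savings math above reduces to a couple of one-line calculations. Here is a minimal C# sketch of that arithmetic; the savings figures come from the article's example, while the project cost and equity base are purely illustrative assumptions added to contrast ROI with ROE.

        // A back-of-the-envelope version of the article's example.
        // The cost and equity figures are hypothetical, not from the article.
        class SavingsExample
        {
            static void Main()
            {
                decimal minutesSavedPerTransaction = 5m;   // from UPK usage tracking
                decimal dollarsPerMinuteSaved = 1m;        // from the research step above
                int transactionsPerYear = 100000;

                decimal annualSavings = minutesSavedPerTransaction * dollarsPerMinuteSaved * transactionsPerYear; // $500,000

                decimal implementationCost = 250000m;      // assumed project cost
                decimal shareholderEquity = 10000000m;     // assumed equity base

                System.Console.WriteLine("Annual savings: {0:C}", annualSavings);
                System.Console.WriteLine("ROI on the project: {0:P0}", annualSavings / implementationCost);
                System.Console.WriteLine("Contribution to ROE: {0:P1}", annualSavings / shareholderEquity);
            }
        }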

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the "Data Hub" – a codename for a project that ironically actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from internal sources with external data, bringing up the security, connection-string, and exploration questions all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it's available to you – and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users can access that data. You're then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you've assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets.

    Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData Feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
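
    Because the data can be surfaced as an OData feed, any HTTP-capable client can consume it. Below is a rough C# sketch of that idea; the endpoint URL is a placeholder I made up, not the actual Data Hub address, and the real service would require whatever credentials the portal assigns.

        using System;
        using System.Net;

        class DataHubFeedSample
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Placeholder endpoint - substitute the feed URL exposed by your portal
                    client.Credentials = CredentialCache.DefaultCredentials;
                    string atomFeed = client.DownloadString("https://example-datahub/odata/Customers");
                    Console.WriteLine(atomFeed); // raw Atom/OData payload
                }
            }
        }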

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspx

    Off the cuff, it may look like you're stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That's not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple prototypes. One of those prototypes lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup.

    @Html.ValidationMessageFor(m => Model.Whatever, null, new { @class = "my-error" })

    By passing the htmlAttributes parameter, which is the last parameter in the call to the prototype of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles to the my-error css class. When you run your MVC project and view the source, you'll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own…it's really up to you.

    Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn't occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds (our site originally had a black background) and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else's post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That's a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn't appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you're passing it. All you've got to do to style it the way you want it is to put it in whatever markup you desire. Take a look at this, for example…

    <div class="my-anchor">@Html.ValidationMessageFor(m => Model.Whatever)</div>
    @Html.TextBoxFor(m => Model.Whatever)

    Now, given that bit of HTML, consider the following CSS…

    <style>
    .my-anchor { position:relative; }
    .field-validation-error {
        background-color:white;
        border-radius:4px;
        border: solid 1px #333;
        display: block;
        position: absolute;
        top:0; right:0; left:0;
        text-align:right;
    }
    </style>

    The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using CSS3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC. While looking for something else recently, I saw someone asking how to style the output for Html.ValidationSummary.
    Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it also. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors. In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box and they didn't want to show an empty red box when there aren't any errors. Obviously, you can use the CSS3 selectors to apply different styles to the list when it's empty and when it's not empty; however, that's not supported in all browsers. Well, it just so happens that the unordered list carries the style validation-summary-valid when the list is empty. While the div rendered by the Html.ValidationSummary helper renders a visible div, containing one invisible list item, you can always just style the whole div with "display:none" when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don't like that solution, which I like quite well, you can also check the model state for errors with something like this…

    int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);

    That'll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can.

    That picture of the fat guy jumping has nothing to do with the article. That's just a picture of me on the roof and I thought it was funny. Doesn't every post need a picture?
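
    To make that last idea concrete, here is a small hedged sketch (the extension method name and placement are my own, not from the post) that wraps the ModelState error count so a view only renders the summary when something actually failed validation.

        using System.Linq;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        public static class ValidationSummaryExtensions
        {
            // Renders the validation summary only when ModelState contains errors,
            // avoiding the empty red box described above.
            public static MvcHtmlString ValidationSummaryIfErrors(this HtmlHelper html)
            {
                int errors = html.ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);
                return errors > 0 ? html.ValidationSummary() : MvcHtmlString.Empty;
            }
        }

    In a view you would then call @Html.ValidationSummaryIfErrors() where the red box normally sits.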

    Read the article

  • Thinking Local, Regional and Global

    - by Apeksha Singh-Oracle
    The FIFA World Cup tournament is the biggest single-sport competition: it's watched by about 1 billion people around the world. Every four years each national team's manager is challenged to pull together a group of players who ply their trade across the globe. For example, of the 23 members of Brazil's national team, only four actually play for Brazilian teams, and the rest play in England, France, Germany, Spain, Italy and Ukraine. Each country's national league, each team and each coach has a unique style. Getting all these "localized" players to work together successfully as one unit is no easy feat. In addition to $35 million in prize money, much is at stake – not least national pride and global bragging rights until the next World Cup in four years' time.

    Achieving economic integration in the ASEAN region by 2015 is a bit like trying to create the next World Cup champion by 2018. The team comprises Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam. All have different languages, currencies, cultures and customs, rules and regulations. But if they can pull together as one unit, the opportunity is not only great for business and the economy, but it's also a source of regional pride. BCG expects that by 2020 the number of firms headquartered in Asia with revenue exceeding $1 billion will double to more than 5,000. Their trade in the region and with the world is forecast to increase to 37% of an estimated $37 trillion of global commerce by 2020, from 30% in 2010.

    Banks offering transactional banking services to the emerging marketplace need to prepare to respond to customer needs across the spectrum – MSMEs, SMEs, corporates and multinational corporations. Customers want innovative, differentiated, value-added products and services that provide:
    • Pan-regional operational independence while enabling a single source of truth at a regional level
    • Regional connectivity and cash & liquidity optimization
    • A consistent experience for their customers by offering standardized products & services across all ASEAN countries
    • Multi-channel & self-service capabilities / access to real-time information on liquidity and cash flows
    • Convergence of cash management with supply chain and trade finance

    While enabling the above to meet customer demands, a comprehensive and robust credit management solution for effective regional banking operations is a must to manage risk. According to BCG, Asia-Pacific wholesale transaction-banking revenues are expected to triple to $139 billion by 2022 from $46 billion in 2012. To take advantage of the trend, banks will have to manage and maximize their own growth opportunities, compete on a broader scale, manage the complexity within the region and increase efficiency. They'll also have to choose the right operating model and regional IT platform to offer:
    • Account Services
    • Cash & Liquidity Management
    • Trade Services & Supply Chain Financing
    • Payments
    • Securities Services
    • Credit and Lending
    • Treasury Services

    The core platform should be able to balance global needs and local nuances. Certain functions need to be performed at a regional level, while others need to be performed on a country level. Financial reporting and regulatory compliance are a case in point. The ASEAN Economic Community is in the final lap of its preparations for the ultimate challenge: becoming a formidable team in the global league.
    Meanwhile, transaction banks are designing their own hat trick: implementing a world-class IT platform, positioning themselves to respond to customer needs and establishing a foundation for revenue generation for years to come.

    Anand Ramachandran
    Senior Director, Global Banking Solutions Practice
    Oracle Financial Services Global Business Unit

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job.

    I once took over the system administration (of which the Oracle and SQL Server servers were a part) at a mid-sized firm. The outgoing administrator had about a two-week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it. Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of the feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know?"

    The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality, I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big items I need to know to get started with the systems:

    1. Where is everything/everybody?
    The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the U number in the rack, the SAN LUNs, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phones.

    2. What do they do?
    For both the servers and the people, tell them what they do. If it's a database server, detail what each database does and what application goes to that, and who "owns" that application. In my mind, this is one of the most important things a Data Professional needs to know. In the case of the other administrators or co-owners, document each person's responsibilities.

    3. What are the credentials?
    Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!

    4. What is out of the ordinary?
    This is the most tricky, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X?
    Is the person that co-owns an application with you mentally unstable (like me), or do they have special needs, like "don't talk to Buck before he's had coffee. Nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here.

    This is my short list - anything you care to add?

    Read the article

  • 12.10 unable to install or even run from Live CD with nVidia GTX 580

    - by user99056
    I've used Ubuntu in the past (set up as a web server, etc. over in Iraq), so I'm not a 100% Linux noob; however, I'm running into a brick wall here. I've got a machine I built when I got back to the US earlier this year, running Windows 7 Ultimate on it, and I've now got some free time and would like to transition over to Ubuntu full time. I've searched around in the forums, and there seems to be an issue with the nVidia graphics cards, so I've tried going to the EVGA site to see if I could find a new BIOS update for it and had no luck, so I'm back searching the forums here again and decided to just go ahead and post my question. My apologies if this is covered in another post and I was just unable to find it. I've found a few 'similar' posts, but nothing as bad as my issue.

    With the history aside, here is the actual detailed issue: I purchased a new SSD (Intel 520 SSD), which arrived today, and I disconnected my old Windows 7 SSD. I had pre-downloaded ubuntu-12.10-desktop-amd64 earlier today and burned it to DVD. Upon inserting the Live CD into the computer and booting up, everything was fine up to the 'Run From Live CD' or 'Install Ubuntu Now' buttons. As I was sure I wanted to go ahead and make the switch, I selected 'Install Now' from the right hand side. The CD spins up, a black window pops up, and then the errors started:

    date/time GPU Lockup
    date/time Failed to idle channel 1
    date/time PFIFO - playlist update failed
    date/time Failed to idle channel 2
    date/time PFIFO - playlist update failed

    Thinking it might correct itself, I let it run, and it would swap over to a GUI screen that was locked up with major blurring/etc, then back to the command line with the errors. Eventually it said something along the lines of 'unknown status' and switched back to the GUI and froze. So, that's when I tried to see if I could find a BIOS upgrade for the nVidia GTX 580 cards, and had no luck. So I thought, why not try to just run it from the Live CD and see if I can at least get a look at it, and maybe if I could get it running try to do some sort of install from there and fix the driver issue. I rebooted, brought up the Live CD, and this time chose the left option / run from the CD. It brought me all the way in to the desktop, I saw my drives, the other icons, could move the mouse, etc for about 30 seconds, and then it locked up completely. I've tried this a couple of times and get the same results every time.

    Hardware: Intel i7-3930K CPU @ 3.2GHz (12 CPUs) / MSI MS-7760 Motherboard / 32GB RAM / 2 x EVGA (nVidia) GeForce GTX 580 (4GB RAM each)

    So the question is: Is there any way to install 12.10 if you can't even get the Live CD to run (for more than 30 seconds)? My current hardware configuration is that both of the GTX 580 cards have an SLI jumper on them, and I have 2 monitors on each card. (Ubuntu info obviously only shows on the main monitor from the failed installation and the attempt at running the Live CD.) Perhaps opening the machine back up, removing the SLI jumper and removing the other 3 monitors (so it only would have 1 video card with one monitor on it) would actually allow me to get 12.10 installed, then I could work on an nVidia video driver fix for the GTX 580, and then possibly hook up the other video card and monitors? Or is this something that they are currently aware of and may update with a future release in the next few days/weeks?
Any thoughts or suggestions would be greatly appreciated, as I can't even try to fix the issue (assuming it is the nVidia drivers) if I can't even get it to install at all.

    Read the article

  • Reactive Extensions vs FileSystemWatcher

    - by Joel Mueller
    One of the things that has long bugged me about the FileSystemWatcher is the way it fires multiple events for a single logical change to a file. I know why it happens, but I don't want to have to care - I just want to reparse the file once, not 4-6 times in a row. Ideally, there would be an event that only fires when a given file is done changing, rather than every step along the way. Over the years I've come up with various solutions to this problem, of varying degrees of ugliness. I thought Reactive Extensions would be the ultimate solution, but there's something I'm not doing right, and I'm hoping someone can point out my mistake. I have an extension method: public static IObservable<IEvent<FileSystemEventArgs>> GetChanged(this FileSystemWatcher that) { return Observable.FromEvent<FileSystemEventArgs>(that, "Changed"); } Ultimately, I would like to get one event per filename, within a given time period - so that four events in a row with a single filename are reduced to one event, but I don't lose anything if multiple files are modified at the same time. BufferWithTime sounds like the ideal solution. var bufferedChange = watcher.GetChanged() .Select(e => e.EventArgs.FullPath) .BufferWithTime(TimeSpan.FromSeconds(1)) .Where(e => e.Count > 0) .Select(e => e.Distinct()); When I subscribe to this observable, a single change to a monitored file triggers my subscription method four times in a row, which rather defeats the purpose. If I remove the Distinct() call, I see that each of the four calls contains two identical events - so there is some buffering going on. Increasing the TimeSpan passed to BufferWithTime seems to have no effect - I went as high as 20 seconds without any change in behavior. This is my first foray into Rx, so I'm probably missing something obvious. Am I doing it wrong? Is there a better approach? Thanks for any suggestions...
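
    For what it's worth, one pattern commonly suggested for this kind of debouncing (a hedged sketch, not a tested answer to the question above) is to group the events by file path and throttle each group, so a burst of change notifications for one file collapses into a single notification once that file has been quiet for the chosen interval, while changes to different files stay independent:

        // Assumes the GetChanged extension method and Rx usings from the question.
        var debouncedChanges = watcher.GetChanged()
            .Select(e => e.EventArgs.FullPath)
            .GroupBy(path => path)                                        // one stream per file
            .SelectMany(group => group.Throttle(TimeSpan.FromSeconds(1))); // fire after 1s of quiet

        debouncedChanges.Subscribe(path => Console.WriteLine("Reparse: " + path));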

    Read the article

  • ASP.NET websites under IIS 7.5 (Windows 7) running extremely slow

    - by emzero
    I've just installed Windows 7 x64 Ultimate on my desktop PC. I installed IIS, Visual Studio 2008, registered ASP.NET, etc. I have this ASP.NET 3.5 website I'm working on running EXTREMELY slowly on this new IIS. On the STA and PROD servers (Windows 2003 Server) and on my old XP/IIS 5.1 everything runs smoothly. A page which usually takes 1-2 seconds to load is taking 8 seconds!!! I saw this post on the IIS forum. It says something about Vista/7 not pooling connections (just to let you know, the website is running locally but it's connecting to a SQL Server 2005 hosted on a remote server). It seems that it takes a while to "start loading" the page... I mean, I click refresh and it stays for several seconds "Waiting for localhost"... Then when it gets a response it loads the whole page normally... I don't have a clue how to force Win7/IIS7.5 to pool database connections.

    EDIT: I've created a new empty ASP.NET web application to see if the problem happens there too. The answer is no, it responds as fast as it should with an empty default page. Maybe it's something related to the DB connection. I will do a further test. There should be a way to fix it...

    EDIT 2: Debugging the app I noticed that the delay occurs AFTER the execution of .NET code (Page_Load, etc)... so the delay seems to be somewhere when IIS serves the page to the browser.
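
    On the pooling question specifically: ADO.NET pools connections per connection string by default, so one thing worth ruling out (a hedged sketch with made-up server and database names, not a diagnosis of the delay described in the edits) is whether the site's connection string accidentally disables or fragments the pool.

        // Pooling is on by default; making it explicit documents the intent.
        // Every page must use an identical string, or each variant gets its own pool.
        string connectionString =
            "Data Source=remoteSqlServer;Initial Catalog=MyDatabase;" +
            "Integrated Security=SSPI;Pooling=true;Min Pool Size=2;Max Pool Size=100;";

        using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
        {
            connection.Open(); // reuses a pooled connection when one is available
            // ... run commands ...
        }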

    Read the article

  • fatal error C1034: windows.h: no include path set

    - by nathan
    OS: Windows Vista Ultimate. I am trying to compile a program called minimal.c. When I type at the command line:

    C:\Users\nathan\Desktop>cl minimal.c
    Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86
    Copyright (C) Microsoft Corporation. All rights reserved.
    minimal.c
    minimal.c(5) : fatal error C1034: windows.h: no include path set

    I have set all the paths:

    C:\Users\nathan\Desktop>path
    PATH=C:\Program Files (x86)\Microsoft Visual Studio 8\VC\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files\Intel\DMIX;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Java\jdk1. .0_13\bin;C:\Program Files (x86)\Autodesk\Backburner\;C:\Program Files (x86)\Common Files\Autodesk Shared\;C:\Program Files (x86)\Microsoft DirectX SDK (March 2009)\Include;C:\Users\nathan\Desktop\glut-3.7.6-bin\glut-3.7.6-bin;C:\Program Files (x86)\Microsoft Visual Studio 8\Common7\IDE;C:\Program Files (x86)\Microsoft Visual Studio 8\VC\PlatformSDK\Include;C:\Program Files (x86)\Microsoft Visual Studio 8\VC\PlatformSDK\Include\gl

    I have gone and made sure windows.h is in the directory I'm setting the path to. It's in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\PlatformSDK\Include. I have Visual Studio 2005. I have exhausted all possibilities. Any ideas?

    Read the article

  • Can't run ASP.NET MVC 2 web app on IIS 7.5

    - by Portman
    I'm trying to run an ASP.NET MVC 2 web application under IIS on Windows 7, but I get a 403.14 error. Here are the steps to reproduce:

    1. Open Visual Studio 2010.
    2. Create a new ASP.NET MVC 2 project called MvcApplication1.
    3. Shift+F5 to run the app. You should see http://localhost:{random_port}/ and the page will render correctly.
    4. Click on MvcApplication1, and select "Properties". Go to the "Web" section. Select "Use Local IIS Web server" and create a virtual directory. Save.
    5. Shift+F5 to run the app. You should see http://localhost/MvcApplication1/ and an IIS error: HTTP Error 403.14 - Forbidden. The Web server is configured to not list the contents of this directory.

    It's clear that for whatever reason, ASP.NET routing is not working correctly. Things I've already thought of and tried:
    • Verified that all IIS features are enabled in "Turn Windows features on or off".
    • Verified that the default website is configured to use .NET 4.0.
    • Reassigned ASP.NET v4 scriptmaps via aspnet_regiis -i in the v4.0.30319 directory.

    Here's the most amazing part - this is on a just-built machine. New copy of Windows 7 x64 Ultimate, clean install of Visual Studio 2010 Premium, no other websites and no other work performed. Anything else I can try?

    Read the article

  • Interpret Objective C scripts at runtime on iPhone?

    - by Brad Parks
    Is there any way to load an Objective-C script at runtime, and run it against the classes/methods/objects/functions in the current iPhone app? The reason I ask is that I've been playing around with iPhone Wax, a Lua interpreter that can be embedded in an iPhone app, and it works very nicely, in the sense that any object/method/function that's publicly available in your Objective-C code is automatically bridged, and available in Lua. This allows you to rapidly prototype applications by simply making the core of your app be Lua files that are in the user's documents directory. Just reload the app, and you can test out changes to your Lua files without needing to rebuild the app in Xcode - a big time saver! But, with Apple's recent 3.1.3 SDK stuff, it got me thinking that the safest approach for doing this type of rapid prototyping would be if you could use Objective-C as the interpreted code... That way, worst case scenario, you could just compile it into your app before your release, instead. I have heard that the Lua source can be compiled to byte code, and linked in at build time, but I think the ultimate safe thing would be if the scripted source was in Objective-C, not Lua. This leads me to wondering (I've searched, but come up with nothing) if there are any examples of how to embed an Objective-C interpreter in an iPhone app? This would allow you to rapidly prototype your app against the current classes that are built into your binary, and, when you're about to deploy your app, instead of running the classes through the in-app interpreter, you compile them in instead.

    Read the article

  • Parse html and find data in the html

    - by Dan.StackOverflow
    Hi all. I am trying to use html5lib to parse an HTML page into something I can query with XPath. html5lib has close to zero documentation and I've spent too much time trying to figure this problem out. The ultimate goal is to pull out the second row of a table:

    <html>
    <table>
    <tr><td>Header</td></tr>
    <tr><td>Want This</td></tr>
    </table>
    </html>

    So let's try it:

    >>> doc = html5lib.parse('<html><table><tr><td>Header</td></tr><tr><td>Want This</td></tr></table></html>', treebuilder='lxml')
    >>> doc
    <lxml.etree._ElementTree object at 0x1a1c290>

    That looks good, let's see what else we have:

    >>> root = doc.getroot()
    >>> print(lxml.etree.tostring(root))
    <html:html xmlns:html="http://www.w3.org/1999/xhtml"><html:head/><html:body><html:table><html:tbody><html:tr><html:td>Header</html:td></html:tr><html:tr><html:td>Want This</html:td></html:tr></html:tbody></html:table></html:body></html:html>

    LOL WUT? Seriously. I was planning on using some XPath to get at the data I want, but that doesn't seem to work. So what can I do? I am willing to try different libraries and approaches.

    Read the article

  • Using DirectX Effect11 with Visual Studio 2012

    - by l3utterfly
    I recently updated to Visual Studio 2012 Ultimate. I was previously programming with the DirectX 11 June 2010 SDK and want to continue to do so using Visual Studio 2012. However, I discovered that VS2012 comes with its own DirectX SDK (in Windows Kit 8.0) and I've been trying to migrate my code using the newer versions of d3d11. Everything went fine until I tried to use effect files in my project (.fx files). I had to compile the Effects11 sample in the DirectX SDK using VS2012 and link the lib file in my project. That went fine too. However, when I compile my project the function D3DX11CreateEffectFromMemory returns an E_NOINTERFACE error (no such interface is supported). Can anyone tell me why that is? Note that I'm using the d3d11.lib from the Windows Kit and the d3dx11.lib from the DirectX SDK. Perhaps I shouldn't mix them? However, everything else works fine when I mix them, except for the effect file creation. Any help would be appreciated. P.S. I don't know if this is helpful, but just so you know, if I add an additional library directory of "DirectXSDKInstallPath\lib\x86\" in the project settings, it works. Why is that? Does it mean I'm using the older version of the libraries? This will give a ton of warnings about redefined headers in winerror.h.

    Read the article

  • "Error generating Win32 resource" in Visual Studio, Windows 7 x64

    - by Jerad Rose
    My co-developers and I recently upgraded machines to Windows 7 Ultimate 64-bit. Some of us are seeing a new error we never used to see when building solutions in Visual Studio (it happens in both 2008 and 2010):

    Error generating Win32 resource: The process cannot access the file because it is being used by another process.

    It always points to some temp file in our output folder, for example: MyProject\obj\Debug\CSC5123.tmp. This happens about once every four or so builds. We then try to run the same exact build again, and it will usually succeed. In some cases, though, it will fail again on the same project, and in some cases, it will fail on a different project. There's really no rhyme or reason to it. But it's very frustrating, especially when it doesn't happen until the build has been running for 20 or so seconds. This also doesn't happen to all of our coworkers. It happens to about one out of four developers. For that one, it happens on about one of every four builds, and for the other three, it never happens. Oh, and did I mention we're all using machines built from the same image? :) Thanks in advance for any direction you can provide.

    Read the article

  • Putting a versioned-but-not-via-source control project in source control

    - by Emilio
    I have some old code (an old but still maintained VB6 application) that from a source control point of view is the ultimate example of the plumber's plumbing (or cobbler's shoes). It's been version controlled by the approach of making a new directory for each version. Are there any major downsides to taking the following approach?

    1. Do the initial check-in of all files.
    2. Erase all files from the working directory, then copy all files from the next version to the working directory.
    3. Check them in.
    4. Go to #2 until done.

    Note that I have a general change log text file which I'd grab the comments from for each version I check in/commit. I don't have (or really care about at this point) comments on a per-file basis. I don't really know at this point which files have changed between versions, and being lazy I figured I could avoid doing file compares between versions to find out, so that's why I'm taking the approach above. Not to mention that erasing all the files first allows file deletions to be detected. I specifically haven't mentioned which version control tool I'm using since I'm hoping (also assuming, but maybe very incorrectly) that the answer is fairly independent of the tool. When I use terms like "check-in" I use them in the general sense, not specific to a tool.

    Read the article

  • C# Active Directory - Check username / password

    - by Michael G
    I'm using the following code on Windows Vista Ultimate SP1 to query our Active Directory server to check the user name and password of a user on a domain.

    public Object IsAuthenticated()
    {
        String domainAndUsername = strDomain + @"\" + strUser;
        DirectoryEntry entry = new DirectoryEntry(_path, domainAndUsername, strPass);
        SearchResult result;
        try
        {
            //Bind to the native AdsObject to force authentication.
            DirectorySearcher search = new DirectorySearcher(entry) { Filter = ("(SAMAccountName=" + strUser + ")") };
            search.PropertiesToLoad.Add("givenName"); // First Name
            search.PropertiesToLoad.Add("sn"); // Last Name
            search.PropertiesToLoad.Add("cn"); // Last Name
            result = search.FindOne();
            if (null == result)
            {
                return null;
            }
            //Update the new path to the user in the directory.
            _path = result.Path;
            _filterAttribute = (String)result.Properties["cn"][0];
        }
        catch (Exception ex)
        {
            return new Exception("Error authenticating user. " + ex.Message);
        }
        return user;
    }

    The target is .NET 3.5, compiled with VS 2008 Standard. I'm logged in under a domain account that is a domain admin where the application is running. The code works perfectly on Windows XP, but I get the following exception when running it on Vista:

    System.DirectoryServices.DirectoryServicesCOMException (0x8007052E): Logon failure: unknown user name or bad password.
    at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
    at System.DirectoryServices.DirectoryEntry.Bind()
    at System.DirectoryServices.DirectoryEntry.get_AdsObject()
    at System.DirectoryServices.DirectorySearcher.FindAll(Boolean findMoreThanOne)
    at System.DirectoryServices.DirectorySearcher.FindOne()
    at Chain_Of_Custody.Classes.Authentication.LdapAuthentication.IsAuthenticated()
    at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
    at System.DirectoryServices.DirectoryEntry.Bind()
    at System.DirectoryServices.DirectoryEntry.get_AdsObject()
    at System.DirectoryServices.DirectorySearcher.FindAll(Boolean findMoreThanOne)
    at System.DirectoryServices.DirectorySearcher.FindOne()
    at Chain_Of_Custody.Classes.Authentication.LdapAuthentication.IsAuthenticated()

    I've tried changing the authentication types, and I'm not sure what's going on.

    See also: http://stackoverflow.com/questions/290548/c-validate-a-username-and-password-against-active-directory
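
    As an aside, on .NET 3.5 there is also the System.DirectoryServices.AccountManagement API, which can validate a set of credentials without manually binding a DirectoryEntry. This is only a hedged sketch (the domain name is a placeholder, and it is not offered as the fix for the Vista-specific COM exception above):

        using System.DirectoryServices.AccountManagement;

        // Requires a reference to System.DirectoryServices.AccountManagement.dll (.NET 3.5+)
        public static bool AreCredentialsValid(string userName, string password)
        {
            using (var context = new PrincipalContext(ContextType.Domain, "mydomain.local"))
            {
                // Returns true only when the username/password pair is accepted by the domain
                return context.ValidateCredentials(userName, password);
            }
        }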

    Read the article

  • DomainContext class is not created in a RIA services class library solution

    - by Konamiman
    I have created a RIA services class library project and things are not going as I expected. The problem is that when I add domain service classes to the server project, the corresponding domain context classes are not generated on the client project. I start by creating a new project of type WCF RIA services class library. The generated solution has two projects: RIAServicesLibrary1 (the Silverlight class library project) and RIAServicesLibrary1.Web (the class library that will hold the services). Then I add a new project item to RIAServicesLibrary1.Web of type DomainServiceClass. I add a sample method so that the resulting class code is: namespace RIAServicesLibrary1.Web { using System; using System.Collections.Generic; using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Linq; using System.ServiceModel.DomainServices.Hosting; using System.ServiceModel.DomainServices.Server; // TODO: Create methods containing your application logic. [EnableClientAccess()] public class DomainService1 : DomainService { [Invoke] void DoSomething() { } } } Then I generate the whole solution and... nothing happens on the client project. The Generated_Code folder is empty, and the domain context object for the service is not here. Funny enough, if I add a new item of type Authentication domain service, it works as expected: the file RIAServicesLibrary1.Web.g.cs is created on the client project, containing the AuthenticationDomainService1 class as expected. So what's happening here? Am I doing something wrong? Note: I am using Visual Studio 2010 Ultimate RTM and WCF RIA services 1.0.
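
    One detail worth checking (purely a hedged guess on my part, not a confirmed cause of the missing DomainContext): WCF RIA Services code generation only surfaces public members, and the DoSomething method in the sample above is not declared public, so even when generation runs it would not produce a matching invoke operation. With the same usings as the sample, a version exposing the operation would look like this:

        [EnableClientAccess()]
        public class DomainService1 : DomainService
        {
            // Public is required for the code generator to emit a matching
            // invoke operation on the Silverlight-side DomainContext.
            [Invoke]
            public void DoSomething()
            {
            }
        }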

    Read the article

  • Unable to run Ajax Minifier as post-build in Visual Studio.

    - by James South
    I've set up my post build config as demonstrated at http://www.asp.net/ajaxlibrary/ajaxminquickstart.ashx I'm getting the following error though: The "JsSourceFiles" parameter is not supported by the "AjaxMin" task. Verify the parameter exists on the task, and it is a settable public instance property. My configuration settings...... <Import Project="$(MSBuildExtensionsPath)\Microsoft\MicrosoftAjax\ajaxmin.tasks" /> <Target Name="AfterBuild"> <ItemGroup> <JS Include="**\*.js" Exclude="**\*.min.js" /> </ItemGroup> <ItemGroup> <CSS Include="**\*.css" Exclude="**\*.min.css" /> </ItemGroup> <AjaxMin JsSourceFiles="@(JS)" JsSourceExtensionPattern="\.js$" JsTargetExtension=".min.js" CssSourceFiles="@(CSS)" CssSourceExtensionPattern="\.css$" CssTargetExtension=".min.css" /> </Target> I had a look at the AjaxMinTask.dll with reflector and noted that the publicly exposed properties do not match the ones in my config. There is an array of ITaskItem called SourceFiles though so I edited my configuration to match. <Import Project="$(MSBuildExtensionsPath)\Microsoft\MicrosoftAjax\ajaxmin.tasks" /> <Target Name="AfterBuild"> <ItemGroup> <JS Include="**\*.js" Exclude="**\*.min.js" /> </ItemGroup> <ItemGroup> <CSS Include="**\*.css" Exclude="**\*.min.css" /> </ItemGroup> <AjaxMin SourceFiles="@(JS);@(CSS)" SourceExtensionPattern="\.js$;\.css$" TargetExtension=".min.js;.min.css"/> </Target> I now get the error: The "SourceFiles" parameter is not supported by the "AjaxMin" task. Verify the parameter exists on the task, and it is a settable public instance property. I'm scratching my head now. Surely it should be easier than this? I'm running Visual Studio 2010 Ultimate on a Windows 7 64 bit installation.

    Read the article

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security camera-like vantage point. First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera? Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions. Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm and so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!!

    Read the article

  • htaccess mod_rewrite check file/directory existence, else rewrite?

    - by devians
    I have a very heavy .htaccess mod_rewrite file that runs my application. As we sometimes take over legacy websites, I sometimes need to support old URLs to old files, where my application processes everything post-htaccess. My ultimate goal is to have a 'Demilitarized Zone' for old file structures, and use mod_rewrite to check for existence there before pushing to the application. This is pretty easy to do with files, by using:

    RewriteCond %{IS_SUBREQ} true
    RewriteRule .* - [L]
    RewriteCond %{ENV:REDIRECT_STATUS} 200
    RewriteRule .* - [L]
    RewriteCond Public/DMZ/$1 -F [OR]
    RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    This allows pseudo support for relative URLs by not hardcoding my base path (I can't assume I will ever be deployed in the document root) anywhere and using subrequests to check for file existence. Works fine if you know the file name, i.e. http://domain.com/path/to/app/legacyfolder/index.html However, my legacy URLs are typically http://domain.com/path/to/app/legacyfolder/ mod_rewrite will allow me to check for this by using -d, but it needs the complete path to the directory, i.e.

    RewriteCond Public/DMZ/$1 -F [OR]
    RewriteCond /var/www/path/to/app/Public/DMZ/$1 -d
    RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    I want to avoid the hardcoded base path. I can see one possible solution here: somehow determining my path, attaching it to a variable [E=name:var] and using it in the condition. Another option is using -U, but the tricky part is stopping it from hijacking every other request when they should flow through, since -U is really easy to satisfy. Any implementation that allows me to existence-check a directory is more than welcome. I am not interested in using RewriteBase, as that requires my htaccess to have a hardcoded base path.

    Read the article

  • Apache unresponsive on Vista [closed]

    - by William Hudson
    I had been running Apache on Vista for around a year, but recently upgraded my workstation. I did a clean install of Vista Ultimate and installed the latest version of the Apache server for win32 (2.2.11, no SSL). The service runs fine and there were no errors reported during the install, nor are there any errors in the Apache logs. However, any attempt to access the web site on localhost (or 127.0.0.1) just hangs the browser. I have used netstat to check who is listening to port 80 and it shows httpd.exe. I have also tried adjusting the .conf file to use port 8080 but this had no effect either (except to change the netstat output). This is a development system with quite a few other pieces of software installed. However, when I tried installing IIS, it worked fine (I removed it soon after before reattempting the Apache install). Using the older 2.0 version of Apache has no effect. Windows firewall is not running. I have disabled my NOD32 anti-virus. Any ideas what is going on? Regards, William

    Read the article

  • iPhone - Failed to save the videos metadata to the filesystem.

    - by cameron
    My application uses the UIImagePicker to allow the user to use the camera and capture a photo to edit/etc. I am getting the error message below: 2010-02-03 10:41:24.018 LivingRoom[5333:5303] Failed to save the videos metadata to the filesystem. Maybe the information did not conform to a plist. Program received signal: “EXC_BAD_ACCESS”. A search in Google brings up a number of threads in various forums, with no ultimate response/root cause/suggestions on how to fix/debug. An example is the thread below with code which is very similar to my app: http://groups.google.com/group/iphonesdkdevelopment/browse_thread/thread/6b7b396c62bef398 The error disappears for a while (10 tests in a row, no errors) if I reboot the iPhone. I have not been able to determine what makes it reoccur after a reboot, but it does. I am not using the video source and the fact that a reboot solves the problem for a while points to some sort of mem leak (perhaps?). The problem always shows up on both the iPhone (even after the reboot) and the simulator when choosing a photo from the album, but the app does not crash on the iPhone or the simulator. The same app with exact code did not have the error message when compiled using SDK 3.0 (last August/September). But 3.1.x has always produced the error message, which means that once a week or so the iPhone needs to be rebooted for the error to disappear. The users are not happy with that solution any longer!! Any suggestion/clues would be greatly appreciated.

    Read the article

  • Why Does the iPad Main View Refuse to go FullScreen?

    - by dugla
    I am doing an imaging app for iPad and it requires use of the entire screen. The approach I have used on iPhone does not appear to work on iPad. In Interface Builder I have set the UIToolbar to translucent. This code echoes the dimensions of the main view before and after requesting fullscreen:

    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        [self.window addSubview:self.viewController.view];
        NSLog(@"Hello Popover AD - application did Finish Launching With Options - viewSize: %f %f BEFORE", self.viewController.view.bounds.size.width, self.viewController.view.bounds.size.height);
        [self.viewController setWantsFullScreenLayout:YES];
        [self.viewController.view layoutIfNeeded];
        NSLog(@"Hello Popover AD - application did Finish Launching With Options - viewSize: %f %f AFTER", self.viewController.view.bounds.size.width, self.viewController.view.bounds.size.height);
        [self.window makeKeyAndVisible];
        return YES;
    }

    This is what NSLog has to say:

    Hello Popover AD - application did Finish Launching With Options - viewSize: 768 1004 BEFORE
    Hello Popover AD - application did Finish Launching With Options - viewSize: 768 1004 AFTER

    Can someone please tell me what I am doing incorrectly here? Note, on iPhone I set fullscreen within the init method of the relevant ViewController. Can view resizing only be done in a ViewController? My ultimate goal is a fullscreen view nicely tucked underneath a translucent status bar and tool bar. I will retract the status/tool bars when user interaction begins in the main view. Thanks, Doug

    Read the article

  • SUA + Visual Studio + pthreads

    - by vasek7
    Hi, I cannot compile this code under SUA:

    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>

    void * thread_function(void *arg)
    {
        printf("thread_function started. Arg was %s\n", (char *)arg);
        // pause for 3 seconds
        sleep(3);
        // exit and return a message to another thread
        // that may be waiting for us to finish
        pthread_exit("thread one all done");
    }

    int main()
    {
        int res;
        pthread_t a_thread;
        void *thread_result;
        // create a thread that starts to run 'thread_function'
        pthread_create(&a_thread, NULL, thread_function, (void*)"thread one");
        printf("Waiting for thread to finish...\n");
        // now wait for new thread to finish
        // and get any returned message in 'thread_result'
        pthread_join(a_thread, &thread_result);
        printf("Thread joined, it returned %s\n", (char *)thread_result);
        exit(0);
    }

    I'm running on Windows 7 Ultimate x64 with Visual Studio 2008 and 2010, and I have installed Windows Subsystem for UNIX Utilities and SDK for Subsystem for UNIX-based Applications in Microsoft Windows 7 and Windows Server 2008 R2. The Include directories property of the Visual Studio project is set to "C:\Windows\SUA\usr\include". What do I have to configure in order to compile, run (and possibly debug) pthreads programs in Visual Studio 2010 (or 2008)?

    Read the article

  • PowerShell: How to find and uninstall a MS Office Update

    - by Hank
    I've been hunting for a clean way to uninstall an MS Office security update on a large number of workstations. I've found some awkward solutions, but nothing as clean or general as using PowerShell and Get-WmiObject with Win32_QuickFixEngineering and the .Uninstall method on the resulting object. [Apparently, Win32_QuickFixEngineering only refers to Windows patches. See: http://social.technet.microsoft.com/Forums/en/winserverpowershell/thread/93cc0731-5a99-4698-b1d4-8476b3140aa3 ]

    Question 1: Is there no way to use Get-WmiObject to find MS Office updates? There are so many classes and namespaces, I have to wonder.

    This particular Office update (KB978382) can be found in the registry here (for Office Ultimate):

    HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall\{91120000-002E-0000-0000-0000000FF1CE}_ULTIMATER_{6DE3DABF-0203-426B-B330-7287D1003E86}

    which kindly shows the uninstall command of:

    msiexec /package {91120000-002E-0000-0000-0000000FF1CE} /uninstall {6DE3DABF-0203-426B-B330-7287D1003E86}

    and the last GUID seems constant between different versions of Office. I've also found the update like this:

    $wu = new-object -com "Microsoft.Update.Searcher"
    $wu.QueryHistory(0,$wu.GetTotalHistoryCount()) | where {$_.Title -match "KB978382"}

    I like this search because it doesn't require any poking around in the registry, but:

    Question 2: If I've found it like this, what can I do with the found information to facilitate the uninstall? Thanks

    Read the article
