Search Results

Search found 48340 results on 1934 pages for 'microsoft test manager'.


  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the information I have gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets.

    So roughly what I have in mind is to split the file according to the number of allowed connections per file and send one HTTP request per part with the appropriate "Range". For a 4 MB file and 4 allowed connections, I would split the file into 4 parts and have 4 HTTP requests going, each with its own "Range" field. Implementing the resume feature would then just mean remembering which offsets have already been downloaded and not requesting them again.

    My questions:

    - Is this the right way to do it? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" header and just send the entire file.)
    - When sending the HTTP requests, should I specify the entire size of a part in the range, or ask for smaller pieces, say 1024 KB per request?
    - When reading the data, should I write it to the file immediately or do some kind of buffering? I guess it could be wasteful to write small chunks.
    - Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-efficient, and what if I have several downloads running simultaneously?
    - If I'm not using a memory-mapped file, should I open one file handle per allowed connection, or simply seek whenever I need to write? (With a memory-mapped file this would be really easy, since I could simply keep several pointers.)

    Note: I'll probably be using Qt, but this is a general question so I left code out of it.
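    A minimal sketch of the "Range" mechanics, shown in C# purely for illustration (the asker's project is C++/Qt); the URL, offsets and local path are made up. A 206 Partial Content response means the server honoured the range, while a plain 200 means it ignored the header and is sending the whole file, which also answers the resuming question above.

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class RangeDownloadSketch
        {
            // Download bytes [start, end] of url and write them at the same
            // offset inside localPath (one pre-created file, seek per part).
            static async Task DownloadPartAsync(HttpClient client, string url,
                                                string localPath, long start, long end)
            {
                var request = new HttpRequestMessage(HttpMethod.Get, url);
                request.Headers.Range = new RangeHeaderValue(start, end); // "Range: bytes=start-end"

                using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
                {
                    if (response.StatusCode != HttpStatusCode.PartialContent)
                        throw new InvalidOperationException("Server ignored the Range header.");

                    using (var body = await response.Content.ReadAsStreamAsync())
                    using (var file = new FileStream(localPath, FileMode.OpenOrCreate,
                                                     FileAccess.Write, FileShare.Write))
                    {
                        file.Seek(start, SeekOrigin.Begin);
                        await body.CopyToAsync(file); // FileStream buffers internally; no need to hand-write tiny chunks
                    }
                }
            }
        }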

    Read the article

  • How to get a fully transparent backbuffer in directx 9 without vista Desktop Window Manager

    - by flawlesslyfaulted
    I currently have an ActiveX control that initiates a media (video/audio) framework that another development group in my company developed, and I am providing a window handle to that code. That handle is used by their rendering plugin in the pipeline, which uses Direct3D for rendering the video. I have separate LPDIRECT3D9EX and LPDIRECT3DDEVICE9EX pointers that I initialize in my ActiveX control.

    I am trying to clear a backbuffer to transparent and then use DirectX drawing primitives to draw on that backbuffer, producing a transparent window with my drawing primitives over the streaming video on the DirectX surface below. It appears that clearing a device backbuffer with full alpha transparency is ignored by DirectX:

        d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_RGBA(0, 0, 1, 0 /*full alpha*/), 1.0f, 0);

    I can see the objects I draw, but they are drawn on top of a backbuffer that has the RGB color specified without the alpha value. The project linked (http://www.codeproject.com/KB/directx/umvistad3d.aspx) in the Stack Overflow question below does what I want, but it requires Vista's Desktop Window Manager and won't work for XP. http://stackoverflow.com/questions/148275/how-do-i-draw-transparent-directx-content-in-a-transparent-window

    I have tried setting D3DRS_ALPHABLENDENABLE to true with a configured blend, to no avail. I have also tried to have pixels with full alpha values not rendered using a D3DRS_ALPHATESTENABLE, D3DRS_ALPHAREF, and D3DRS_ALPHAFUNC setup, but this doesn't work either. I have tried using ColorFill with alpha after retrieving the backbuffer with GetBackBuffer, but this doesn't work either (again only RGB is used). Finally, I have tried creating a texture, selecting a surface, color-filling that surface with a fully transparent alpha value, then loading that surface onto the backbuffer, but only the RGB values appear to be used.

    I have checked the capabilities using DXCapsViewer.exe, and the D3DFMT_A8R8G8B8 format that I am using for the backbuffer is valid, so it can't be that. Has anyone gotten a transparent backbuffer in DirectX to work on XP?

    Read the article

  • Form dependency manager javascript

    - by user225269
    I need help with a form dependency manager in JavaScript. As you can see, the checkbox below depends on either of two criteria (info=student or info=all). I've come up with the code below based on this syntax:

        DEPENDS ON name [BEING value] [OR name [BEING value]]
        CONFLICTS WITH name [BEING value]

    from this site: http://www.dynamicdrive.com/dynamicindex16/formdependency.htm

    Here's the code:

        <tr>
          <td>
            <input type="hidden" name="yr">
            <label>
              <input type="checkbox" name="yr" value="year" class="DEPENDS ON info BEING student OR info BEING all"> Year</label>
          </td>
        </tr>

    EDIT: What do I do? It doesn't work. When I change this:

        DEPENDS ON info BEING student

    into this:

        DEPENDS ON info BEING student OR info BEING all

    the checkbox, which is supposed to show up only after one of the radio buttons is clicked, now shows up before either of the radio buttons (student, all) has been clicked.

    Read the article

  • loading google maps drawing manager object

    - by psychok7
    So I am using the Google Maps Drawing Manager to draw some polygons, and I am saving the lat and lng coordinates to my database. Now my question is: after I load those coordinates back into my array, how can I rebuild the saved polygon on my map? I can't seem to find code that shows how to do that. This is what I have now:

        window.initialize_2 = function () {
            var mapOptions = {
                center: new google.maps.LatLng(-34.397, 150.644),
                zoom: 8,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            var map = maplimits;
            var drawingManager = new google.maps.drawing.DrawingManager({
                drawingMode: google.maps.drawing.OverlayType.MARKER,
                drawingControl: true,
                drawingControlOptions: {
                    position: google.maps.ControlPosition.TOP_CENTER,
                    drawingModes: [google.maps.drawing.OverlayType.POLYGON]
                },
                markerOptions: { icon: 'images/beachflag.png' },
                polygonOptions: {
                    fillColor: '#ffff00',
                    fillOpacity: 10,
                    strokeWeight: 5,
                    clickable: true,
                    editable: true,
                    zIndex: 1
                }
            });
            var coord_listener = google.maps.event.addListener(drawingManager, 'polygoncomplete', function (polygon) {
                var coordinates = (polygon.getPath().getArray());
                console.log(coordinates);
                window.poly = polygon;
            });
            // delete shape
            google.maps.event.addListener(drawingManager, 'overlaycomplete', function (e) {
                if (e.type != google.maps.drawing.OverlayType.MARKER) {
                    // Switch back to non-drawing mode after drawing a shape.
                    drawingManager.setDrawingMode(null);
                    // Add an event listener that selects the newly-drawn shape when the user
                    // mouses down on it.
                    var newShape = e.overlay;
                    newShape.type = e.type;
                    google.maps.event.addListener(newShape, 'click', function () {
                        setSelection(newShape);
                    });
                    setSelection(newShape);
                }
            });
            // Clear the current selection when the drawing mode is changed, or when the
            // map is clicked.
            google.maps.event.addListener(drawingManager, 'drawingmode_changed', clearSelection);
            google.maps.event.addListener(map, 'click', clearSelection);
            drawingManager.setMap(map);
        }

    Read the article

  • PHP package manager

    - by Mathias
    Hey, does anyone know of a package manager library for PHP (like apt or yum for Linux distros) apart from PEAR? I'm working on a system which should include a package management system for module management. I managed to get a working solution using PEAR, but using the PEAR client for anything other than managing a PEAR installation is not really the optimal solution, as it isn't designed for that. I would have to modify/extend it (e.g. to implement actions on installation/upgrade, or to move PEAR-specific files like lockfiles away from the system root), and the CLI client code in particular is quite messy and PHP4. So maybe someone has suggestions:

    - for an alternative PEAR client library which is easy to use and extend (the server side has some nice implementations like Pirum and pearhub),
    - for completely different package management systems written in PHP (ideally including dependency tracking and different channels),
    - or for some general ideas on how to implement such a PM system (yes, I'm still tinkering with the idea of implementing such a system from scratch).

    I know that big systems like Magento and symfony use PEAR for their PM. Magento uses a hacked version of the original PEAR client (which I'd like to avoid); symfony's implementation seems quite integrated with the framework, but it would be a good starting point for at least writing the client from scratch. Anyway, if anybody has suggestions: please :)

    Read the article

  • java.lang.IllegalArgumentException: View not attached to window manager

    - by alex2k8
    I have an activity that starts an AsyncTask and shows a progress dialog for the duration of the operation. The activity is declared NOT to be recreated by rotation or keyboard slide:

        <activity android:name=".MyActivity"
                  android:label="@string/app_name"
                  android:configChanges="keyboardHidden|orientation" >
            <intent-filter>
            </intent-filter>
        </activity>

    Once the task completes, I dismiss the dialog, but on some phones (framework: 1.5, 1.6) this error is thrown:

        java.lang.IllegalArgumentException: View not attached to window manager
        at android.view.WindowManagerImpl.findViewLocked(WindowManagerImpl.java:356)
        at android.view.WindowManagerImpl.removeView(WindowManagerImpl.java:201)
        at android.view.Window$LocalWindowManager.removeView(Window.java:400)
        at android.app.Dialog.dismissDialog(Dialog.java:268)
        at android.app.Dialog.access$000(Dialog.java:69)
        at android.app.Dialog$1.run(Dialog.java:103)
        at android.app.Dialog.dismiss(Dialog.java:252)
        at xxx.onPostExecute(xxx$1.java:xxx)

    My code is:

        final Dialog dialog = new AlertDialog.Builder(context)
            .setTitle("Processing...")
            .setCancelable(true)
            .create();
        final AsyncTask<MyParams, Object, MyResult> task = new AsyncTask<MyParams, Object, MyResult>() {
            @Override
            protected MyResult doInBackground(MyParams... params) {
                // Long operation goes here
            }

            @Override
            protected void onPostExecute(MyResult result) {
                dialog.dismiss();
                onCompletion(result);
            }
        };
        task.execute(...);
        dialog.setOnCancelListener(new OnCancelListener() {
            @Override
            public void onCancel(DialogInterface arg0) {
                task.cancel(false);
            }
        });
        dialog.show();

    From what I have read (http://bend-ing.blogspot.com/2008/11/properly-handle-progress-dialog-in.html) and seen in the Android sources, it looks like the only possible situation that produces this exception is when the activity has been destroyed. But as I mentioned, I forbid activity recreation for the basic events. So any suggestions are very much appreciated.

    Read the article

  • Google App Engine error Object Manager has been closed

    - by newbie
    I got the following error from Google App Engine when I was trying to iterate over a list in a JSP page with EL:

        Object Manager has been closed

    I solved the problem with the following code, but I don't think it is a very good solution:

        public List<Item> getItems() {
            PersistenceManager pm = getPersistenceManager();
            Query query = pm.newQuery("select from " + Item.class.getName());
            List<Item> items = (List<Item>) query.execute();
            List<Item> items2 = new ArrayList<Item>(); // This line solved my problem
            Collections.copy(items, items2);           // and this also
            pm.close();
            return (List<Item>) items;
        }

    When I tried to use pm.detachCopyAll(items) it gave the same error. I understood that the detachCopyAll() method should do the same thing I did, and that method is part of DataNucleus, so it should be used instead of my own workaround. So why doesn't detachCopyAll() work at all?

    Read the article

  • Silverlight ViewBase in separate assembly - possible?

    - by Mark
    I have all my views in a project inheriting from a ViewBase class that inherits from UserControl. In my XAML I reference it like this:

        <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView"
            xmlns:f="clr-namespace:Forte.UI.Modules.Configure.Views"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

    It works fine. Now I have moved the ViewBase to another project (so I can reference it from multiple projects), so I reference it like this:

        <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView"
            xmlns:f="clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

    This works fine when I run from the IDE, but when I build the same sln from MSBuild it gives a warning:

        "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) -
        (ValidateXaml target) -
        H:\dev\ExternalCopy\Code\UI\Modules\Configure\Views\AddNewEmployee\AddNewEmployeeView.xaml(1,2,1,2): warning : The tag 'ViewBase' does not exist in XML namespace 'clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common'.

    Then it fails with:

        "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) -
        (ValidateXaml target) -
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: The "ValidateXaml" task failed unexpectedly.
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: System.NullReferenceException: Object reference not set to an instance of an object.
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at MS.MarkupCompiler.ValidationPass.ValidateXaml(String fileName, Assembly[] assemblies, Assembly callingAssembly, TaskLoggingHelper log, Boolean shouldThrow)
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute()
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)

    Any ideas what might be causing this behaviour? I am using Silverlight 3.

    Here is a cut-down version of the MSBuild file that fails to build the sln that builds fine in the IDE:

        <?xml version="1.0" encoding="utf-8" ?>
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Compile">
          <ItemGroup>
            <ProjectToBuild Include="..\UI\Forte.UI.sln">
              <Properties>Configuration=Debug</Properties>
            </ProjectToBuild>
          </ItemGroup>
          <Target Name="Compile">
            <MSBuild Projects="@(ProjectToBuild)"></MSBuild>
          </Target>
        </Project>

    Thanks for any help!
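    For reference, the shared base class itself can stay minimal. A rough sketch of what the ViewBase in the Common assembly might look like (names taken from the question; the main assumptions are that the class is public and keeps a parameterless constructor so the XAML-generated subclasses can instantiate it):

        // Forte.UI.Modules.Common project
        using System.Windows.Controls;

        namespace Forte.UI.Modules.Common.Views
        {
            // Shared functionality for all module views would live here.
            public class ViewBase : UserControl
            {
                public ViewBase()
                {
                    // Keep this parameterless so XAML-generated subclasses
                    // can call it implicitly.
                }
            }
        }

    The code-behind of each view then declares "public partial class AddNewEmployeeView : ViewBase", matching the <f:ViewBase x:Class="..."> root element above.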

    Read the article

  • Add Your Gmail Account to Outlook 2010 Using IMAP

    - by Mysticgeek
    If you’re upgrading from Outlook 2003 to 2010, you might want to use IMAP with your Gmail account to synchronize mail across multiple machines. Using our guide, you will be able to start using it in no time.

    Enable IMAP in Gmail

    First log into your Gmail account and open the Settings panel. Click on the Forwarding and POP/IMAP tab, verify IMAP is enabled, and save changes. Next open Outlook 2010, click on the File tab to access the Backstage view. Click on Account Settings and then Add and remove accounts or change existing connection settings. In the Account Settings window click on the New button. Enter your name, email address, and password twice, then click Next. Outlook will configure the email server settings; the amount of time it takes will vary. Provided everything goes correctly, the configuration will be successful and you can begin using your account.

    Manually Configure IMAP Settings

    If the above instructions don’t work, we’ll need to manually configure the settings. Again, go into Auto Account Setup, select Manually configure server settings or additional server types, and click Next. Select Internet E-mail – Connect to POP or IMAP server to send and receive e-mail messages. Now we need to manually enter our settings, similar to the following. Under the Server Information section verify the following:

        Account Type: IMAP
        Incoming mail server: imap.gmail.com
        Outgoing mail server (SMTP): smtp.gmail.com

    Note: If you have a Google Apps account, make sure to put the full email address ([email protected]) in the Your Name and User Name fields. Note: If you live outside of the US you might need to use imap.googlemail.com and smtp.googlemail.com.

    Next, we need to click on the More Settings button. In the Internet E-mail Settings screen that pops up, click on the Outgoing Server tab and check the box next to My outgoing server (SMTP) requires authentication. Also select the radio button next to Use same settings as my incoming mail server. In the same window click on the Advanced tab and verify the following:

        Incoming server: 993
        Incoming server encrypted connection: SSL
        Outgoing server encrypted connection: TLS
        Outgoing server: 587

    Note: You will need to change the Outgoing server encrypted connection first, otherwise it will default back to port 25. Also, if TLS doesn’t work, we were able to successfully use Auto. Click OK when finished.

    Now we want to test the settings before continuing on…it’s just easier that way in case something was entered incorrectly. To make sure the settings are tested, check the box labeled Test Account Settings by clicking the Next button. If you’ve entered everything correctly, both tasks will be completed successfully and you can close out of the window. You’ll get a final congratulations message you can close out of, and then you can begin using your account via Outlook 2010.

    Conclusion

    Using IMAP allows you to synchronize email across multiple machines and devices. The IMAP feature in Gmail is free to use, and this should get you started using it with Outlook 2010. If you’re still using 2007 or just upgraded to it, check out our guide on how to use Gmail IMAP in Outlook 2007.

    Read the article

  • Integrate Google Docs with Outlook the Easy Way

    - by Matthew Guay
    Want to use Google Docs and Microsoft Office together? Here’s how you can use Harmony for Google Docs to integrate them seamlessly with Outlook.

    Harmony for Google Docs is an exciting new plugin for Outlook 2007 (a version for Outlook 2010 is in the works). It lets you integrate your Google Docs account with Outlook via a sidebar. From this, you can find any of your Google docs or upload a new document, and then you can open the document to view or edit it in Outlook.

    Getting Started

    Download Harmony for Google Docs (link below), and install as normal. Make sure Outlook is closed before you run the install. Next time you open Outlook, the new Harmony sidebar will automatically open. Enter your Google Account info, and click Sign In. Now, all of your Google Docs will show up in the sidebar. Double-click any file to open it in Outlook. You may have to sign in to Google Docs the first time you open a document.

    Here’s a Google Doc open in Outlook. Notice that everything works, including full editing. All Google Docs features worked great in our tests except for the new drawings tool. When we tried to insert a drawing, Outlook had a script error. This was the only problem we had with Harmony, and it could be due to an interaction between Google Drawings and Outlook’s rendering engine.

    Harmony makes it easy to find any file in your Google Docs account. You can search for a file, or sort your files by type, recentness, and more. You can also easily add any document to Google Docs directly from Harmony. You can drag and drop any document, including one attached to an email, to the Harmony sidebar, and it will be uploaded directly to your Google Docs account. And, when you’re writing a new email or reply, click the Show Documents button to open the Harmony sidebar. From here, you can add documents as usual and share them with the email recipient.

    Conclusion

    We previously covered a similar app, OffiSync, which combines Google Docs features with MS Office. However, Harmony makes it much easier to use Google Apps along with Outlook. This gives you an easy and efficient way to collaborate on documents with coworkers, all without leaving Outlook. And, if your company uses SharePoint instead of Google Docs, Harmony offers a SharePoint edition that integrates with Outlook just as easily!

    Link: Download Harmony for Google Docs

    Read the article

  • How to Add Your Gmail Account to Outlook 2013 Using IMAP

    - by Lori Kaufman
    If you use Outlook to check and manage your email, you can easily use it to check your Gmail account as well. You can setup your Gmail account to allow you to synchronize email across multiple machines using email clients instead of a browser. We will show you how to use IMAP in your Gmail account so you can synchronize your Gmail account across multiple machines, and then how to add your Gmail account to Outlook 2013. To setup your Gmail account to use IMAP, sign in to your Gmail account and go to Mail. Click the Settings button in the upper, right corner of the window and select Settings from the drop-down menu. On the Settings screen, click Forwarding and POP/IMAP. Scroll down to the IMAP Access section and select Enable IMAP. Click Save Changes at the bottom of the screen. Close your browser and open Outlook. To begin adding your Gmail account, click the File tab. On the Account Information screen, click Add Account. On the Add Account dialog box, you can choose the E-mail Account option which automatically sets up your Gmail account in Outlook. To do this enter your name, email address, and the password for your Gmail account twice. Click Next. The progress of the setup displays. The automatic process may or may not work. If the automatic process fails, select Manual setup or additional server types, instead of E-mail Account, and click Next. On the Choose Service screen, select POP or IMAP and click Next. On the POP and IMAP Account Settings enter the User, Server, and Logon Information. For the Server Information, select IMAP from the Account Type drop-down list and enter the following for the incoming and outgoing server information: Incoming mail server: imap.googlemail.com Outgoing mail server (SMTP): smtp.googlemail.com Make sure you enter your full email address for the User Name and select Remember password if you want Outlook to automatically log you in when checking email. Click More Settings. On the Internet E-mail Settings dialog box, click the Outgoing Server tab. Select the My outgoing server (SMTP) requires authentication and make sure the Use same settings as my incoming mail server option is selected. While still in the Internet E-mail Settings dialog box, click the Advanced tab. Enter the following information: Incoming server: 993 Incoming server encrypted connection: SSL Outgoing server encrypted connection TLS Outgoing server: 587 NOTE: You need to select the type of encrypted connection for the outgoing server before entering 587 for the Outgoing server (SMTP) port number. If you enter the port number first, the port number will revert back to port 25 when you change the type of encrypted connection. Click OK to accept your changes and close the Internet E-mail Settings dialog box. Click Next. Outlook tests the accounts settings by logging into the incoming mail server and sending a test email message. When the test is finished, click Close. You should see a screen saying “You’re all set!”. Click Finish. Your Gmail address displays in the account list on the left with any other email addresses you have added to Outlook. Click the Inbox to see what’s in your Inbox in your Gmail account. Because you’re using IMAP in your Gmail account and you used IMAP to add the account to Outlook, the messages and folders in Outlook reflect what’s in your Gmail account. 
Any changes you make to folders and any time you move email messages among folders in Outlook, the same changes are made in your Gmail account, as you will see when you log into your Gmail account in a browser. This works the other way as well. Any changes you make to the structure of your account (folders, etc.) in a browser will be reflected the next time you log into your Gmail account in Outlook.     
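    If you like to verify things with a little code, the same outgoing-server settings described above (smtp.googlemail.com, port 587, TLS, authenticated) can be sanity-checked outside Outlook with a few lines of C#. This is only a rough sketch: the address and password are placeholders, and Gmail accounts with 2-step verification will need an app-specific password.

        // Sends a test message to yourself using the SMTP settings from the article.
        using System.Net;
        using System.Net.Mail;

        class GmailSmtpCheck
        {
            static void Main()
            {
                using (var client = new SmtpClient("smtp.googlemail.com", 587))
                {
                    client.EnableSsl = true;  // TLS (STARTTLS) on port 587
                    client.Credentials = new NetworkCredential("you@gmail.com", "your-password");
                    client.Send("you@gmail.com", "you@gmail.com",
                                "SMTP settings test", "Sent through smtp.googlemail.com:587 with TLS.");
                }
            }
        }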

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
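    Since the original post shows its code only as screenshots, which have not survived here, the following is a rough C# reconstruction of the final solution as the article describes it: a map of roman "digit" factors, greedy factor finding from the largest value down, then translation and concatenation. Names are illustrative rather than the author's exact code.

        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // Roman "digits" taken at face value, largest first (IV, IX, ... count as digits too).
            private static readonly (int Value, string Digit)[] Map =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100,  "C"), (90,  "XC"), (50,  "L"), (40,  "XL"),
                (10,   "X"), (9,   "IX"), (5,   "V"), (4,   "IV"), (1, "I")
            };

            public string ToRoman(int arabic)
            {
                var factors = FindFactors(arabic);        // e.g. 1954 -> 1000, 900, 50, 4
                var digits = factors.Select(Translate);   // e.g. "M", "CM", "L", "IV"
                return string.Concat(digits);             // compile: "MCMLIV"
            }

            private static IEnumerable<int> FindFactors(int value)
            {
                foreach (var (factor, _) in Map)
                {
                    // Use the highest remaining factor as often as it still fits.
                    while (value >= factor)
                    {
                        yield return factor;
                        value -= factor;
                    }
                }
            }

            private static string Translate(int factor) =>
                Map.First(m => m.Value == factor).Digit;
        }

    A quick spot check against the article's own worked example: ToRoman(1954) finds the factors 1000, 900, 50 and 4, and yields "MCMLIV".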

    Read the article

  • 'Microsoft.Practices.EnterpriseLibrary.Caching.CacheFactory' threw an exception

    - by user281180
    Hi, I'm getting the error message:

        The type initializer for 'Microsoft.Practices.EnterpriseLibrary.Caching.CacheFactory' threw an exception.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Practices.ObjectBuilder2, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

        Source Error:
        Line 30: private static ICacheManager GetCacheManager()
        Line 31: {
        Line 32:     return CacheFactory.GetCacheManager(cacheManagerName);
        Line 33: }
        Line 34: }

        Source File: C:\Dev\DEV\HotHouse\HotHousetest3_rtmClone107\Code\MvcUI\State\PersistentCache.cs    Line: 32

        Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.Practices.ObjectBuilder2, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' could not be loaded.

    My colleagues, using the same DLLs, are not getting the error message. Help please. I have Microsoft.Practices.EnterpriseLibrary.Caching and Microsoft.Practices.EnterpriseLibrary.Common as references, both version 4.1.0.0, runtime version v2.0.50727.
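    The inner FileNotFoundException is about Microsoft.Practices.ObjectBuilder2.dll, which Enterprise Library 4.1 loads at runtime even though it is usually not a direct project reference, so the most likely difference between the machines is whether that DLL ends up in the application's bin folder. A quick, throwaway way to check (a diagnostic sketch, not production code) is to load the exact assembly name from the exception and see where it resolves from:

        using System;
        using System.Reflection;

        class ObjectBuilder2Check
        {
            static void Main()
            {
                // Display name copied verbatim from the exception message.
                const string name =
                    "Microsoft.Practices.ObjectBuilder2, Version=2.2.0.0, " +
                    "Culture=neutral, PublicKeyToken=31bf3856ad364e35";

                try
                {
                    var asm = Assembly.Load(name);
                    Console.WriteLine("Resolved from: " + asm.Location);
                }
                catch (Exception ex) // FileNotFoundException / FileLoadException
                {
                    Console.WriteLine("Could not load it: " + ex.Message);
                }
            }
        }

    If it fails only on your machine, the usual fix is to make sure Microsoft.Practices.ObjectBuilder2.dll (and, depending on your configuration, Microsoft.Practices.Unity.dll) is copied next to the Caching and Common assemblies, for example by referencing it directly with Copy Local set to true.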

    Read the article

  • Flat File Connection Manager in SSIS package shows "Valid File Name Must be Selected"

    - by Traples
    Flat file location (columns) vs. SSIS host (rows):

    SSIS host        | Samba Share | Windows Share
    -----------------+-------------+--------------
    XP 32bit         | Works       | Works
    2003 Serv 32bit  | Works       | Works
    Vista 64bit      | ERROR       | Works
    Win 7 64bit      | ERROR       | Works
    2008 Serv 64bit  | ERROR       | Works

    I created an SSIS package in VS 2008 that parses a flat file from a shared folder and puts the records into a SQL Server db. I recently installed Windows 7 and VS 2008 on a new workstation. When I import the package from TFS and open it, I get the error: Validation error. Parse and Import Catalog Flat File: MySSISPackage: The file name "\\shared\flatfile.txt" specified in the connection was not valid. When I open the Flat File Connection Manager Editor, there is an error stating: "A valid file name must be selected." I can browse to and select the file from inside the editor, but I cannot change any properties, or move away from the General tab, because of this error. If I go back to my laptop (Windows XP), where the package was first created, there is no error. Both workstations are on the same domain, and I'm logging in using the same credentials. Any ideas as to why I would receive this error from one workstation and not another?

    UPDATE: If I take the .dtsx package from the running workstation and load it into SSIS on the server, I get the following errors when it tries to run: Error: The file name "\\shared\flatfile.txt" specified in the connection was not valid. and... Error: Connection "MySSISPackage" failed validation. and... Error: The file name property is not valid. The file name is a device or contains invalid characters.

    UPDATE 2: a) The shared folder I'm trying to pull the flat file from is a Samba share on a Unix box. b) If I access the file using SSIS on any 64-bit platform (Windows 7 64-bit, Vista 64-bit, Windows Server 2008) I get the error "A valid file name must be selected." c) Accessing the file using SSIS from 32-bit environments (Windows XP 32-bit, Windows Server 2003 32-bit) there is no problem. d) If I move the file to a shared folder on a Windows server, 64-bit SSIS recognizes the file.
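    (For what it's worth, a throwaway console sketch one could run on the affected 64-bit machines to separate SSIS from plain file access; the path is copied from the error message and should be adjusted. If even this fails against the Samba share from a 64-bit process, the problem lies with the share or authentication under 64-bit Windows rather than with the connection manager itself.)

      using System;
      using System.IO;

      class ShareProbe
      {
          static void Main()
          {
              // Path taken from the validation error; adjust to the real share.
              const string path = @"\\shared\flatfile.txt";

              Console.WriteLine("Process bitness: " + (IntPtr.Size * 8) + "-bit");
              Console.WriteLine("File.Exists:     " + File.Exists(path));

              // File.Exists swallows access errors, so also try to open the file
              // to surface the real exception (access denied, path not found, ...).
              try
              {
                  using (FileStream stream = File.OpenRead(path))
                  {
                      Console.WriteLine("Opened, length = " + stream.Length);
                  }
              }
              catch (Exception ex)
              {
                  Console.WriteLine("Open failed: " + ex.GetType().Name + " - " + ex.Message);
              }
          }
      }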

    Read the article

  • MSTest VS2010 - DeploymentItem copying files to different locations on different machines

    - by Jack
    I have found that DeploymentItem [TestClass(), DeploymentItem(@"TestData\")] is not copying my test data files to the same location when tests are built and run on different machines. The test data files are copied to the "bin\debug" directory in the test project on my machine, but on my friend's machine they are copied to "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out". The bin\debug directory on my machine can be obtained with the code: string appDirectory = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location); and the same code will return "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" on my friend's PC. This, however, isn't really the problem. The problem is that the test data files I have made have a folder structure, and this folder structure is only maintained on my machine when copied to bin\debug, whereas on my friend's machine only the files are added to the "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" directory. This means that tests will pass on my machine and fail on his! Is there a way to ensure that DeploymentItem always copies to the bin\debug folder? Or a way to ensure that the folder structure will be retained when DeploymentItem copies the files to the "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" folder?
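    (A sketch of one way to make the behaviour consistent, with hypothetical test and file names: give DeploymentItem an output directory so the folder structure is recreated under the deployment directory, and resolve paths via TestContext.DeploymentDirectory instead of the executing assembly's location.)

      using System.IO;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      // Copy the whole TestData folder and recreate it as "TestData" inside the
      // deployment directory, so the relative layout is the same on every machine.
      [DeploymentItem(@"TestData\", "TestData")]
      public class ImporterTests
      {
          // MSTest injects this automatically.
          public TestContext TestContext { get; set; }

          [TestMethod]
          public void Reads_sample_file_from_deployment_directory()
          {
              // Resolve against the deployment directory, not the assembly location,
              // so it works whether files land in bin\Debug or TestResults\...\Out.
              string path = Path.Combine(TestContext.DeploymentDirectory, @"TestData\sample.txt");
              Assert.IsTrue(File.Exists(path), "Folder structure was not preserved: " + path);
          }
      }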

    Read the article

  • Memory leak with Microsoft.JScript.Eval.JScriptEvaluate?

    - by Dmi
    I'm evaluating some JavaScript with Microsoft.JScript.Eval.JScriptEvaluate(), and my memory profiler shows a large number of System.String instances left allocated by Microsoft.JScript.HashtableEntry. Microsoft.JScript.Eval is a static class; does anyone know what class is holding these instances and how I can clear them?

    Read the article

  • In the context of the TFS version control SDK (Microsoft.TeamFoundation.VersionControl), what exactly is deletionID?

    - by Frank Schwieterman
    In the context of the TFS version control SDK (Microsoft.TeamFoundation.VersionControl), what exactly is deletionID? It is a property of Microsoft.TeamFoundation.VersionControl.Client.Item. It is also a parameter to some of the query methods on Microsoft.TeamFoundation.VersionControl.Client. I'm trying to figure out exactly what it means, and how it might impact queries.
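    (For illustration, a sketch against the 2010 client API with a placeholder collection URL and server path. My reading, offered as a hedge rather than documentation: DeletionId is non-zero only for deleted items and distinguishes multiple deleted items that once shared the same server path, which is why some query methods accept it as a parameter.)

      using System;
      using Microsoft.TeamFoundation.Client;
      using Microsoft.TeamFoundation.VersionControl.Client;

      class DeletionIdProbe
      {
          static void Main()
          {
              // Placeholder collection URL and server path.
              var collection = new TfsTeamProjectCollection(new Uri("http://tfs:8080/tfs/DefaultCollection"));
              var vcs = collection.GetService<VersionControlServer>();

              // Ask explicitly for deleted items; live items report DeletionId == 0.
              ItemSet items = vcs.GetItems(
                  "$/MyProject/SomeFolder",
                  VersionSpec.Latest,
                  RecursionType.Full,
                  DeletedState.Deleted,
                  ItemType.Any);

              foreach (Item item in items.Items)
              {
                  // Two deleted items can share the same ServerItem path;
                  // DeletionId is what tells them apart.
                  Console.WriteLine("{0}  DeletionId={1}", item.ServerItem, item.DeletionId);
              }
          }
      }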

    Read the article

  • Workflow for academic research projects, one-step builds, and the Joel Test

    - by Steve
    Working alone on academic research sometimes breeds bad habits. With no one else reading my code, I would write a lot of throw-away code, and I would lose track of intermediate results which, weeks or months later, I wish I had retained. My recent attempts to make my personal workflow conform to the Joel Test raised interesting questions. Academic research has inherently different goals than industrial software development, and therefore some aspects of the Joel Test become less valid. Nevertheless, I still find these steps valuable for academic research: Do you use source control? Can you make a build in one step? Do you have an up-to-date schedule? Do you have a spec? Of particular use is the one-step build. I find myself more organized now that I have implemented the following "one-step build": a single script, build.py, that accepts Python code, data, and TeX as inputs. The outputs are results, figures, and a paper with all the results filled in. (Yes, I know "build" is probably not accurate in this context, but you get the idea.) By consolidating many small steps into one, I am not backtracking as much as I used to. ...but I'm sure there is still room for improvement. Question: For research projects, which steps of the Joel Test do you still value? Do you have a one-step build? If so, what does yours consist of, i.e., what inputs does it accept, and what output does it generate?

    Read the article

  • Microsoft Reporting DLL's in medium trust environment

    - by Linda
    My host, Rackspace Cloud Sites, has a modified Medium Trust environment. One of our legacy applications, which we are moving onto the server, uses the following DLLs:
    Microsoft.ReportViewer.Common.dll
    Microsoft.ReportViewer.ProcessingObjectModel.dll
    Microsoft.ReportViewer.WebForms.dll
    Microsoft.ReportViewer.WinForms.dll
    My understanding is that these DLLs work in a medium trust environment if deployed to the GAC. Sadly, Rackspace will not do this for me. What options do I have, apart from moving to a different plan? Deploying the DLLs to the bin does not work, as the permissions are incorrect. Could I decompile the DLLs and make them work in a medium trust environment?

    Read the article

  • Error while dynamically loading mapi32.dll

    - by The_Fox
    Our application uses Simple MAPI to send e-mails. One of our clients has problems sending e-mail from a session on his terminal server. The mapi32.dll is loaded with a call to LoadLibrary which succeeds, but then our application tries to get the addresses of the functions MAPILogon, MAPILogOff, MAPISendMail, MAPIFreeBuffer and MAPIResolveName. The problem is that GetProcAddress fails for those functions with an ERROR_ACCESS_DENIED (code: 5), except for MAPIFreeBuffer. It looks like some sort of security thing. How can I fix this, or should I use another method to send mail? FYI, here is some more information about the OS and the contents of registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Messaging Subsystem:

    OS info: 5.2.3790 VER_PLATFORM_WIN32_NT Service Pack 2

    Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem:
    InstallCmd: rundll32 setupapi,InstallHinfSection MSMAIL 132 msmail.inf
    MAPI: 1
    CMCDLLNAME: mapi.dll
    CMCDLLNAME32: mapi32.dll
    CMC: 1
    MAPIX: 1
    MAPIXVER: 1.0.0.1
    OLEMessaging: 1

    Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem\MSMapiApps:
    inetsw95.exe:
    choosusr.dll:
    msab32.dll:
    nwab32.dll:
    outstore.dll: Microsoft Outlook
    CDOEXM.DLL:
    EMSMDB32.DLL:
    EMSABP32.DLL:
    newprof.exe: Microsoft Outlook
    outlook.exe:
    wfxmsrvr.exe: Microsoft Outlook
    msexcimc.exe:
    exchng32.exe:
    schdmapi.dll: Microsoft Outlook
    pilotcfg.exe: Microsoft Outlook
    mailmig.exe: Microsoft Outlook
    admin.exe:
    msspc32.dll: Microsoft Outlook
    cnfnot32.exe: Microsoft Outlook
    ilpilot.exe: Microsoft Outlook
    events.exe:

    I'm on Delphi 7.0, but that shouldn't matter.

    Edit, added version information:

    Fileversion info of C:\WINDOWS\system32\mapi32.dll:
    Fileversion: 6.5.7226.0
    FileDescription=Extended MAPI 1.0 for Windows NT
    CompanyName=Microsoft Corporation
    InternalName=MAPI32
    Comments=Service Pack 1
    LegalCopyRight=Copyright (C) 1986-2003 Microsoft Corp. All rights reserved.
    LegalTradeMarks=Microsoft(R) and Windows(R) are registered trademarks of Microsoft Corporation.
    OriginalFileName=MAPI32.DLL
    ProductName=Microsoft Exchange
    ProductVersion=6.5

    Fileversion info of C:\Program Files\Common Files\SYSTEM\MSMAPI\1043\msmapi32.dll:
    Fileversion: 11.0.5601.0
    FileDescription=Extended MAPI 1.0 for Windows NT
    CompanyName=Microsoft Corporation
    InternalName=MAPI32.DLL
    LegalCopyRight=Copyright © 1995-2003 Microsoft Corporation. All rights reserved.
    OriginalFileName=MAPI32.DLL
    ProductName=MAPI32
    ProductVersion=11.0.5601

    Read the article

  • Postfix/ClamAV not stopping viruses under Virtualmin

    - by Josh
    I am using Virtualmin and have it set up to have Postfix scan incoming emails with ClamAV (using clamdscan) and delete any emails which contain a virus. However when I email myself the EICAR test string, it comes through just fine. I know ClamAV will report this file as a virus. How can I troubleshoot this / what could be wrong?

    Read the article

  • Microsoft.SqlServer.SqlTools.VSIntegration reference problem/oddities in Visual Studio 2010

    - by Sung Meister
    SQL Server Edition: 2008 Enterprise
    Visual Studio: 2010 w/ .NET 4.0

    The SSMS 2008 Addin - Data Scripter project source code on CodePlex references Microsoft.SqlServer.SqlTools.VSIntegration.dll. I have referenced the DLL under <<Microsoft SQL Server install location>>\100\Tools\Binn\VSShell\Common7\IDE. But here is the oddity: Microsoft.SqlServer.SqlTools.VSIntegration.dll contains a namespace Microsoft.SqlServer.Management.UI.VSIntegration, which in turn contains ServiceCache (a public sealed class). As soon as I add the reference, ServiceCache is highlighted (meaning there is no reference issue). But the problem arises when I compile the project: VS 2010 throws an error that it cannot find ServiceCache: "The name 'ServiceCache' does not exist in the current context." Why is it that ServiceCache is not visible at compile time but looks like it's available right after adding the assembly? And Reflector does show that ServiceCache is part of the assembly that the project is referencing, but Visual Studio IntelliSense fails to display it. Has anyone had this kind of problem? [UPDATE] Some screenshots: Reflector clearly shows ServiceCache, but Visual Studio 2010 says otherwise...

    Read the article

  • How to test an HTTP 301 redirect?

    - by NoozNooz42
    How can one easily test HTTP return codes, like, say, a 301 redirect? For example, if I want to "see what's going on", I can use telnet to do something like this:

    $ telnet nytimes.com 80
    Trying 199.239.136.200...
    Connected to nytimes.com.
    Escape character is '^]'.
    GET / HTTP/1.0
    (enter)
    (enter)

    HTTP/1.1 200 OK
    Server: Sun-ONE-Web-Server/6.1
    Date: Mon, 14 Jun 2010 12:18:04 GMT
    Content-type: text/html
    Set-cookie: RMID=007af83f42dd4c161dfcce7d; expires=Tuesday, 14-Jun-2011 12:18:04 GMT; path=/; domain=.nytimes.com
    Set-cookie: adxcs=-; path=/; domain=.nytimes.com
    Set-cookie: adxcs=-; path=/; domain=.nytimes.com
    Set-cookie: adxcs=-; path=/; domain=.nytimes.com
    Expires: Thu, 01 Dec 1994 16:00:00 GMT
    Cache-control: no-cache
    Pragma: no-cache
    Connection: close

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <html>
    <head>
    ...

    This is an easy way to see quite a lot of information. But now I want to test that a 301 redirect is indeed a 301 redirect. How can I do so? Basically, instead of getting an HTTP/1.1 200 OK, I'd like to know how I can get to see the 301. I know that I can enter the URL in a browser and "see" that I'm redirected, but I'd like to know what tool(s) can be used to actually, really "see" the 301 redirect. Btw, I did test with telnet, but when I enter www.example.org, which I redirected to example.org (without the www), all I can see is a "200 OK"; I don't get to see the 301.
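    (One scriptable alternative to telnet, sketched in C# with a placeholder URL: turn off automatic redirect-following so the raw 301 and its Location header become visible instead of the final 200.)

      using System;
      using System.Net;

      class RedirectCheck
      {
          static void Main()
          {
              // Placeholder URL; use the address you expect to redirect.
              var request = (HttpWebRequest)WebRequest.Create("http://www.example.org/");
              request.Method = "GET";
              // Without this the client silently follows the redirect
              // and you only ever see the final 200.
              request.AllowAutoRedirect = false;

              using (var response = (HttpWebResponse)request.GetResponse())
              {
                  // Expect e.g. "301 MovedPermanently" plus the target in the Location header.
                  Console.WriteLine("{0} {1}", (int)response.StatusCode, response.StatusCode);
                  Console.WriteLine("Location: " + response.Headers["Location"]);
              }
          }
      }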

    Read the article

< Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >