Search Results

Search found 9026 results on 362 pages for 'vs extensibility'.


  • Getting Android SDK WebView and TabWidget to play nice

    - by jdandrea
    I'm taking the HelloTabWidget Android example and trying two things:

    1. Moving the tabs to the bottom vs. the top (if that's even desirable from an Android UI point of view)
    2. Making each tab show a particular WebView in the space above

    I've got this for a layout (high level):

        <TabHost>
          <LinearLayout>
            <FrameLayout>
              <WebView/>
              <WebView/>
              <WebView/>
              <WebView/>
              <WebView/>
            </FrameLayout>
            <TabWidget/>
          </LinearLayout>
        </TabHost>

    Everything has its width and height set to fill_parent, except for the TabWidget, which has its layout_height set to wrap_content (and its layout_gravity set to bottom). The first thing I noticed is that the WebViews don't show anything until all the parents have width/height set to fill_parent. However, once I do that, they fill the entire display, obscuring the TabWidget. Is there some other trick to making these two views play nicely together?

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on the matter, and none of them really fits my specific needs. My SVN repository is laid out as follows:

        \projects
          \projectA (a product)
            \tags
              \1.0.0.1
              \1.0.0.2
              ...
            \trunk
              \src
                \proj1 (a VS C# project)
                \proj2
              \documentation

    Then I have a network share with a folder for each project (product), which in turn contains the binaries, written documentation, and the generated API documentation (via NDoc - each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk).

    Basically, what I want to do in a scheduled batch build are these steps:

    1. Examine the project's SVN folder and identify tags not present in the network share.
    2. For each of these tags:
       - check out the tag folder
       - build (with the Release config)
       - copy the resulting binaries to the network share
       - search for .ndoc files
       - generate CHM files via NDoc
       - copy the resulting CHM files to the network share
    3. Do the same as in 2., but for the HEAD revision of the trunk.

    Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet - i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild).

    Could you please provide me with some pointers on how to approach this task, both as a whole and in detail? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.
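
    One way to tackle the tag-discovery step, before wiring it into NAnt or MSBuild, is a small helper (which could later be wrapped as a custom task deriving from Microsoft.Build.Utilities.Task) that shells out to the Subversion command line and compares tag names against the share. A minimal sketch, assuming svn.exe is on the PATH; the repository URL and share path are illustrative:

        using System;
        using System.Diagnostics;
        using System.IO;

        class TagScanner
        {
            static void Main()
            {
                string tagsUrl  = "http://svnserver/projects/projectA/tags";  // hypothetical URL
                string shareDir = @"\\server\builds\projectA";                // hypothetical share

                var psi = new ProcessStartInfo("svn", "list " + tagsUrl)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };

                using (Process svn = Process.Start(psi))
                {
                    string line;
                    while ((line = svn.StandardOutput.ReadLine()) != null)
                    {
                        string tag = line.TrimEnd('/');   // "1.0.0.1/" -> "1.0.0.1"
                        if (tag.Length > 0 && !Directory.Exists(Path.Combine(shareDir, tag)))
                            Console.WriteLine(tag);       // not on the share yet: check out, build, copy
                    }
                    svn.WaitForExit();
                }
            }
        }

    The list this prints could then drive an ordinary loop of checkout/build/copy targets in either tool.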

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via a FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness vs. performance)? Thanks in advance for any ideas, Adam
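
    For what it's worth, a common middle ground is a buffer in the tens of kilobytes - large enough to amortize per-call overhead, small enough to keep progress events frequent and, in .NET, to stay off the large object heap. A rough sketch of the read loop with a progress event, using an illustrative 80 KB buffer:

        using System;
        using System.IO;

        class CopyWithProgress
        {
            public event Action<long, long> Progress;   // (bytes copied, total bytes)

            public void Copy(string sourcePath, string destPath)
            {
                const int BufferSize = 80 * 1024;       // below the ~85 KB large-object-heap threshold
                byte[] buffer = new byte[BufferSize];

                using (var input  = new FileStream(sourcePath, FileMode.Open, FileAccess.Read))
                using (var output = new FileStream(destPath, FileMode.Create, FileAccess.Write))
                {
                    long total = input.Length, copied = 0;
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                        copied += read;
                        var handler = Progress;          // raise the UI progress event
                        if (handler != null) handler(copied, total);
                    }
                }
            }
        }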

  • Why is "Fixup" needed for Persistence Ignorant POCO's in EF 4?

    - by Eric J.
    One of the much-anticipated features of Entity Framework 4 is the ability to use POCOs (Plain Old CLR Objects) in a persistence-ignorant manner (i.e. they don't "know" that they are being persisted with Entity Framework vs. some other mechanism). I'm trying to wrap my head around why it's necessary to perform association fixups and use FixupCollection in my "plain" business object. That requirement seems to imply that the business object can't be completely ignorant of the persistence mechanism after all (in fact, the word "fixup" sounds like something needs to be fixed/altered to work with the chosen persistence mechanism).

    Specifically I'm referring to the Association Fixup region that's generated by the ADO.NET POCO Entity Generator, e.g.:

        #region Association Fixup

        private void FixupImportFile(ImportFile previousValue)
        {
            if (previousValue != null && previousValue.Participants.Contains(this))
            {
                previousValue.Participants.Remove(this);
            }

            if (ImportFile != null)
            {
                if (!ImportFile.Participants.Contains(this))
                {
                    ImportFile.Participants.Add(this);
                }
                if (ImportFileId != ImportFile.Id)
                {
                    ImportFileId = ImportFile.Id;
                }
            }
        }

        #endregion

    as well as the use of FixupCollection. Other common persistence-ignorant ORMs don't have similar restrictions. Is this due to fundamental design decisions in EF? Is some level of non-ignorance here to stay even in later versions of EF? Is there a clever way to hide this persistence dependency from the POCO developer? How does this work out in practice, end-to-end? For example, I understand support was only recently added for ObservableCollection (which is needed for Silverlight and WPF). Are there gotchas in other software layers arising from the design requirements of EF-compatible POCO objects?

  • How do I encapsulate form/post/validation[/redirect] in ViewUserControl in ASP.Net MVC 2

    - by paul
    What I am trying to achieve:

    - encapsulate a Login (or any) Form to be reused across the site
    - when Login/validation fails, post to self and show the original page with a Validation Summary (some might argue to just post to the Login Page and show the Validation Summary there; if what I'm trying to achieve isn't possible, I will just go that route)
    - when Login succeeds, redirect to /App/Home/Index

    I also want to:

    - stick to PRG principles
    - avoid ajax
    - keep the Login Form (UserController.Login()) as encapsulated as possible; avoid having to implement HomeController.Login(), since the Login Form might appear elsewhere

    All but the redirect works. My approach thus far has been:

    Home/Index includes the Login Form:

        <% Html.RenderAction("Login", "User"); %>

    User/Login is a ViewUserControl<UserLoginViewModel> which includes:

        <%= Html.ValidationSummary("") %>

    a form opened with using (Html.BeginForm()) {}, and a hidden form field "userlogin" = "1". The controller:

        public class UserController : BaseController
        {
            ...
            [AcceptPostWhenFieldExists(FieldName = "userlogin")]
            public ActionResult Login(UserLoginViewModel model, FormCollection form)
            {
                if (ModelState.IsValid)
                {
                    if (checkUserCredentials())
                    {
                        setUserCredentials();
                        return this.RedirectToAction<Areas.App.Controllers.HomeController>(x => x.Index());
                    }
                    else
                    {
                        return View();
                    }
                }
                ...
            }
        }

    This works great when the ModelState or user credentials fail -- return View() does yield to Home/Index and displays the appropriate validation summary. (I have a Register Form on the same page, using the same structure; each form's validation summary only shows when that form is submitted.)

    It fails when the ModelState and user credentials are valid -- RedirectToAction<>() gives the following error: "Child actions are not allowed to perform redirect actions."

    It seems like in the Classic ASP days this would've been solved with Response.Buffer = True. Is there an equivalent setting or workaround now? Btw, running: ASP.NET 4, MVC 2, VS 2010, dev/debugging web server. I hope all of that makes sense. So, what are my options? Or where am I going wrong in my approach? tia!
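
    One workaround often suggested for the "Child actions are not allowed to perform redirect actions" error is to keep rendering the form through the child action, but point the form post at the User controller explicitly, so the POST arrives as a top-level request that is allowed to redirect. A hedged sketch - the names follow the question, but the helper layout and the TempData step are illustrative, not the poster's code:

        // In the ViewUserControl: name the target instead of posting to self,
        // so the POST is handled as a normal (non-child) request.
        using (Html.BeginForm("Login", "User"))
        {
            // ... login fields plus the hidden "userlogin" marker ...
        }

        // In UserController: a top-level POST may redirect freely.
        [HttpPost]
        public ActionResult Login(UserLoginViewModel model)
        {
            if (ModelState.IsValid && CheckUserCredentials(model))
            {
                SetUserCredentials(model);
                return RedirectToAction("Index", "Home", new { area = "App" });
            }
            // On failure, stay PRG: stash ModelState and redirect back to the
            // hosting page, where the child action can re-import it.
            TempData["LoginModelState"] = ModelState;
            return RedirectToAction("Index", "Home");
        }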

  • How to publish an ASP.NET MVC website

    - by Luke Puplett
    Hello -- I've a site that I'd like to publish to a co-located live server. I'm finding this simple task quite hard.

    My problems begin with the Web Deploy tool (1.1) giving me a 401 Unauthorized as the administrator, because port :8172 comes up in the errors and this port is blocked - yet the documentation says "The default ListenURL is http://+:80/MsDeployAgentService"! I'm loath to open another port, and I've little patience these days, so I thought bu66er it, I'll create a Web Deploy package and import it into IIS on the server over RDP.

    I notice first that Visual Studio doesn't use a dialog box to gather settings, or use my Publish profiles, but seems to use a tab in the project properties - although I think these are ignored when importing the package anyway?

    I'm now sitting in the import wizard with Application Path and Connection String. I've cleared the conn string, as I think this is for some ASP stuff I don't use, but when I enter nothing in the Application Path, the wizard barks at me, saying that basically I'm a weirdo because most people publish to folders beneath the root site. Now, I want my site to be site.com/Home/About and not site.com/subfolder/Home/About, and I think, this being an MVC routed site, that a subfolder will introduce other headaches. Should I go ahead and use the root?

    Finally, I also want to publish a web service to www.site.com/services/soap, which I think IIS can handle.

    While typing this question, Amazon have delivered my IIS 7 Resource Kit, and I've been scouring the internet, but actually I'm getting more confused. Comment here seems to show consensus opinion that Publish isn't for production sites and that real men roll their own: http://stackoverflow.com/questions/260525/asp-net-website-publish-vs-web-deployment-project ...I guess this was pre-Web Deployment Tool era? I'm going to experiment on a spare box for now, but any assistance is welcome. Luke

  • Is this a bug? I get "The type ... is not a complex type or an entity type" in my WCF data service

    - by veertien
    When invoking a query on the data service I get this error message inside the XML feed:

        <m:error>
          <m:code></m:code>
          <m:message xml:lang="nl-NL">Internal Server Error. The type 'MyType' is not a complex type or an entity type.</m:message>
        </m:error>

    When I use the example described in the article "How to: Create a Data Service Using the Reflection Provider (WCF Data Services)" (http://msdn.microsoft.com/en-us/library/dd728281(v=VS.100).aspx), it works as expected.

    I have created the service in a .NET 4.0 web project. My data context class returns a query object that is derived from LINQExtender (http://linqextender.codeplex.com/). When I execute the query object in a unit test, it works as expected. My entity type is defined as:

        [DataServiceKey("Id")]
        public class Accommodation
        {
            [UniqueIdentifier]
            [OriginalFieldName("EntityId")]
            public string Id { get; set; }

            [OriginalFieldName("AccoName")]
            public string Name { get; set; }
        }

    (The UniqueIdentifier and OriginalFieldName attributes are used by LINQExtender.) Does anybody know if this is a bug in WCF Data Services, or am I doing something wrong?

  • Wix create non advertised shortcut for all users / per machine

    - by mcdon
    In WiX, how do you create a non-advertised shortcut in the all-users profile? So far I've only been able to accomplish this with advertised shortcuts. I prefer non-advertised shortcuts because you can go to the shortcut's properties and use "find target".

    The tutorials I've seen use a registry value for the KeyPath of a shortcut. The problem is they use HKCU as the root. When HKCU is used and another user uninstalls the program (since it's installed for all users), the registry key is left behind. When I use HKMU as the root I get an ICE57 error, but the key is removed when another user uninstalls the program. I seem to be pushed towards using HKCU, though HKMU seems to behave correctly (per-user vs. all-users).

    When I try to create the non-advertised shortcut I get various ICE errors, such as ICE38, ICE43, or ICE57. Most articles I've seen recommend "just ignore the ICE errors". There must be a way to create non-advertised shortcuts without creating ICE errors. Please post sample code for a working example.

  • Web service SSL handshake fails in production environment unless SSL debugging enabled

    - by JST
    Scenario: calling a client web service over SSL (https) with mutual SSL authentication. There are different service endpoint URLs and certs (both keystore and truststore) for the test vs. production environments. Both test and production environments run tomcat / JBoss clustered. The production environment has load balancing / BigIP and runs Blade and non-Blade machines. The truststore is set (using -Djavax.net.ssl.trustStore=value) at startup. The keystore is set using System.setProperty("javax.net.ssl.keyStore", "value") in Java code. The web service call is made using Axis2.

    All works fine in the test environment, but when we moved to the production environment (6 servers), it appears certs are not being forwarded for the handshake. Here's what we've done:

    - in the test environment, the handshake using test versions of the certs has been working all along, with no SSL debugging enabled
    - confirmed in the test environment that the handshake with the client production endpoint succeeds (production certs, both ours and theirs, are fine) -- this was done using -Djavax.net.debug=handshake,ssl
    - confirmed that the error condition occurs on all 6 production servers
    - took one server out of the cluster, turned on SSL debugging for just that one (with a restart), hit it directly: the handshake works!
    - switched to a different server without the debugging turned on: the error condition occurs
    - turned debugging on on that second server (with a restart), hit it directly: the handshake works!

    From the evidence, it seems like the debugging being enabled somehow causes the certificates to be properly retrieved/conveyed, although that makes no sense! I wonder whether the enabled debugging somehow makes the system pay attention to the System.setProperty call, and ignore it otherwise. However, in the local and test environments, the handshake worked without debugging enabled. Do I maybe need to be setting the keystore on server startup, like I'm setting the truststore? I have been avoiding that because the keystore will differ for each of our test environments (16 of them).

  • HttpModule with ASP.NET MVC not being called

    - by mgroves
    I am trying to implement a session-per-request pattern in an ASP.NET MVC 2 Preview 1 application, and I've implemented an IHttpModule to help me do this:

        public class SessionModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.Response.Write("Init!");
                context.EndRequest += context_EndRequest;
            }

            // ... etc...
        }

    And I've put this into the web.config:

        <system.web>
          <httpModules>
            <add name="SessionModule" type="MyNamespace.SessionModule" />
          </httpModules>
        </system.web>

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <remove name="SessionModule" />
            <add name="SessionModule" type="MyNamespace.SessionModule" />
          </modules>
        </system.webServer>

    However, "Init!" never gets written to the page (I'm using the built-in VS web server, Cassini). Additionally, I've tried putting breakpoints in the SessionModule, but they never break. What am I missing?
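
    One thing worth checking, independent of the config: Init runs once per HttpApplication instance, before any request is in flight, so Response.Write there is not a reliable smoke test - and under Cassini only the classic <httpModules> section applies, while a module living in a separate class library may need an assembly-qualified type name ("MyNamespace.SessionModule, MyAssembly"). A sketch of the module reworked to write from a request event instead (the session handling itself is left as a stub):

        using System;
        using System.Web;

        public class SessionModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                // No request exists yet when Init runs; hook the events instead.
                context.BeginRequest += context_BeginRequest;
                context.EndRequest += context_EndRequest;
            }

            private void context_BeginRequest(object sender, EventArgs e)
            {
                HttpApplication app = (HttpApplication)sender;
                app.Context.Response.Write("BeginRequest!");   // visible smoke test
                // open the session here
            }

            private void context_EndRequest(object sender, EventArgs e)
            {
                // dispose the session here
            }

            public void Dispose() { }
        }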

  • Does RabbitMq do round-robin from the exchange to the queues

    - by Lancelot
    Hi, I am currently evaluating message queue systems, and RabbitMq seems like a good candidate, so I'm digging a little more into it. To give a little context, I'm looking to have something like one exchange load-balancing the message publishing to multiple queues. I don't want to replicate the messages, so a fanout exchange is not an option. Also, the reason I'm thinking of having multiple queues vs. one queue handling the round-robin with the consumers is that I don't want our single point of failure to be at the queue level.

    It sounds like I could add some logic on the publisher side to simulate that behavior, by editing the routing key and having the appropriate bindings in place. But that's kind of a passive approach that wouldn't take the pace of message consumption on each queue into account, potentially filling up one queue if the consumer applications for that queue are dead. I was looking for a more proactive way from the exchange entity side, one that would decide where to send the next message based on each queue's size or something of that nature. I read about Alice and the available RESTful APIs, but that seems like kind of a heavy-duty solution for implementing fast routing decisions.

    Does anyone know if round-robin between the exchange and the queues is feasible with RabbitMQ, then? Thanks.
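
    For the publisher-side approach mentioned above, the usual shape is a direct exchange with one binding per queue and a rotating routing key. A minimal sketch against the RabbitMQ .NET client - the queue naming and count are illustrative, and this is exactly the "passive" variant, since it pays no attention to queue depths:

        using System.Text;
        using RabbitMQ.Client;

        // Round-robin publisher: assumes queues "q0".."qN-1" are each bound to
        // a direct exchange with a routing key equal to the queue name.
        class RoundRobinPublisher
        {
            private readonly IModel channel;
            private readonly int queueCount;
            private int next;

            public RoundRobinPublisher(IModel channel, int queueCount)
            {
                this.channel = channel;
                this.queueCount = queueCount;
            }

            public void Publish(string exchange, string message)
            {
                string routingKey = "q" + (next++ % queueCount);
                channel.BasicPublish(exchange, routingKey, null, Encoding.UTF8.GetBytes(message));
            }
        }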

  • Sentiment analysis for twitter in python

    - by Ran
    I'm looking for an open source implementation, preferably in python, of textual sentiment analysis (http://en.wikipedia.org/wiki/Sentiment_analysis). I'm writing an application that searches twitter for some search term, say "youtube", and counts "happy" tweets vs. "sad" tweets. I'm using Google's appengine, so it's in python. I'd like to be able to classify the returned search results from twitter, and I'd like to do that in python. I haven't been able to find such a sentiment analyzer so far, specifically not in python. Are you familiar with an open source implementation I can use? Preferably it's already in python, but if not, hopefully I can translate it to python.

    Note, the texts I'm analyzing are VERY short - they are tweets - so ideally this classifier is optimized for such short texts. BTW, twitter does support the ":)" and ":(" operators in search, which aim to do just this, but unfortunately the classification they provide isn't that great, so I figured I might give this a try myself. Thanks!

    BTW, an early demo is here and the code I have so far is here, and I'd love to opensource it with any interested developer.

  • AppDomain.CurrentDomain.UnhandledException doesn't always fire up

    - by Simon T.
    I encountered an exception in our application that isn't handled at all. I really don't know what to look for to debug this problem, since the application closes immediately when this peculiar exception is thrown (even running from VS). The exception handling is set up this way:

        [STAThread]
        [LoaderOptimizationAttribute(LoaderOptimization.MultiDomainHost)]
        static void Main()
        {
            Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
            Application.ApplicationExit += new EventHandler(ApplicationExitHandler);
            Application.ThreadException += new ThreadExceptionEventHandler(ThreadExceptionHandler);
            AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(UnhandledExceptionHandler);
            ...

    The thread from which the exception is thrown is started this way:

        Thread executerThread = new Thread(new ThreadStart(modele.Exporter));
        executerThread.SetApartmentState(ApartmentState.STA);
        executerThread.Start();

    Now, every unhandled exception thrown from that thread fires up our UnhandledExceptionHandler, except the one I have problems with. Even if I catch the problematic exception and throw it again, the application closes silently. None of the 3 handlers (ApplicationExit, ThreadException, UnhandledException) gets fired (breakpoints are not hit). There is nothing so exceptional in that exception (see details here: http://pastebin.com/fCnDRRiJ).
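
    Until the root cause surfaces, a defensive option is to catch at the top of the worker thread itself and route into the same handler, since a few exception paths never reach the AppDomain event at all. A sketch, reusing the handler name from the code above:

        Thread executerThread = new Thread(delegate()
        {
            try
            {
                modele.Exporter();
            }
            catch (Exception ex)
            {
                // Nothing escapes silently, even if the CLR bypasses the global events.
                UnhandledExceptionHandler(null, new UnhandledExceptionEventArgs(ex, false));
            }
        });
        executerThread.SetApartmentState(ApartmentState.STA);
        executerThread.Start();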

  • Designing a fluid Javascript interface to abstract away the asynchronous nature of AJAX

    - by Anurag
    How would I design an API to hide the asynchronous nature of AJAX and HTTP requests, or basically delay it, to provide a fluid interface? To show an example from Twitter's new Anywhere API:

        // get @ded's first 20 statuses, filter only the tweets that
        // mention photography, and render each into an HTML element
        T.User.find('ded').timeline().first(20).filter(filterer).each(function(status) {
          $('div#tweets').append('<p>' + status.text + '</p>');
        });

        function filterer(status) {
          return status.text.match(/photography/);
        }

    vs. this (where the asynchronous nature of each call is clearly visible):

        T.User.find('ded', function(user) {
          user.timeline(function(statuses) {
            statuses.first(20).filter(filterer).each(function(status) {
              $('div#tweets').append('<p>' + status.text + '</p>');
            });
          });
        });

    It finds the user, gets their tweet timeline, filters only the first 20 tweets, applies a custom filter, and ultimately uses the callback function to process each tweet.

    I am guessing that a well-designed API like this should work like a query builder (think ORMs), where each function call builds the query (the HTTP URL in this case) until it hits a looping function such as each/map/etc.; the HTTP call is then made and the passed-in function becomes the callback.

    An easy development route would be to make each AJAX call synchronous, but that's probably not the best solution. I am interested in figuring out a way to make it asynchronous, and still hide the asynchronous nature of AJAX.

  • Why is TransactionScope operation is not valid?

    - by Cragly
    I have a routine which uses a recursive loop to insert items into a SQL Server 2005 database. The first call which initiates the loop is enclosed within a transaction using TransactionScope. When I first call ProcessItem, the myItem data gets inserted into the database as expected. However, when ProcessItem is called from either ProcessItemLinks or ProcessItemComments, I get the following error:

        "The operation is not valid for the state of the transaction"

    I am running this in debug with VS 2008 on Windows 7, and have the MSDTC running to enable distributed transactions. The code below isn't my production code, but it is set out exactly the same. AddItemToDatabase is a method on a class I cannot modify; it uses a standard ExecuteNonQuery(), which creates a connection and then closes and disposes it once completed. I have looked at other postings on here and the internet and still cannot resolve this issue. Any help would be much appreciated.

        using (TransactionScope processItem = new TransactionScope())
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }

        private void ProcessItem(Item myItem)
        {
            AddItemToDatabase(myItem);
            ProcessItemLinks(myItem);
            ProcessItemComments(myItem);
        }

        private void ProcessItemLinks(Item myItem)
        {
            foreach (Item link in myItem.Links)
            {
                ProcessItem(link);
            }
        }

        private void ProcessItemComments(Item myItem)
        {
            foreach (Item comment in myItem.Comments)
            {
                ProcessItem(comment);
            }
        }

    Here is the top part of the stack trace. Unfortunately I can't show the build-up to this point, as it's company-sensitive information which I cannot disclose; I hope this is enough information.

        at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
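
    One cause worth ruling out before digging further into MSDTC: TransactionScope defaults to a one-minute timeout, and once the transaction has timed out and aborted, the next attempt to enlist a connection throws exactly "The operation is not valid for the state of the transaction". A sketch that lengthens the timeout (the values are illustrative; machine.config's maxTimeout still caps whatever is requested):

        using System;
        using System.Transactions;

        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromMinutes(10)   // the default is 1 minute
        };

        using (var processItem = new TransactionScope(TransactionScopeOption.Required, options))
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }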

  • Starting out Silverlight 4 design

    - by Fermin
    I come from mainly a web development background (ASP.NET, ASP.NET MVC, XHTML, CSS, etc.) but have been tasked with creating/designing a Silverlight application. The application utilises the Bing Maps control for Silverlight; this will be contained in a user control and will be the 'main' screen in the system. There will be numerous other user controls on the form that will be used to choose/filter/sort/order the data on the map. Think of it like Visual Studio: Bing Maps will be like the code editor window, and the other controls will be like Solution Explorer, Find Results, etc. (although a lot fewer of them!).

    I have read up and I'm comfortable with the data side (RIA Services) of the application. I've (kinda) got my head around databinding and using a view model to present data and keep the code-behind file lite. What I do need some help on is the UI design/navigation framework, specifically 2 aspects:

    1. How do I best implement a fluid design so that the various user controls which filter the map data can be resized/pinned/unpinned (for example, like the Solution Explorer in VS)? I made a test using a Grid with a GridSplitter control; is this the best way? Would it be best to create a Grid/GridSplitter with Navigation Frames inside the grid to load the content?
    2. Since I have multiple user controls that basically use the same set of data, should I set the DataContext at the highest possible level (e.g. if using a grid with multiple frames, at the Grid level)?

    Any help, tips, links, etc. will be very much appreciated!

  • File permissions with FileSystemObject - CScript.exe says one thing, Classic ASP says another...

    - by Dylan Beattie
    I have a classic ASP page - written in JScript - that's using Scripting.FileSystemObject to save files to a network share - and it's not working. ("Permission denied.") The ASP page is running under IIS using Windows authentication, with impersonation enabled.

    If I run the following block of code locally via CScript.exe:

        var objNet = new ActiveXObject("WScript.Network");
        WScript.Echo(objNet.ComputerName);
        WScript.Echo(objNet.UserName);
        WScript.Echo(objNet.UserDomain);

        var fso = new ActiveXObject("Scripting.FileSystemObject");
        var path = "\\\\myserver\\my_share\\some_path";
        if (fso.FolderExists(path)) {
            WScript.Echo("Yes");
        } else {
            WScript.Echo("No");
        }

    I get the (expected) output:

        MY_COMPUTER
        dylan.beattie
        MYDOMAIN
        Yes

    If I run the same code as part of a .ASP page, substituting Response.Write for WScript.Echo, I get this output:

        MY_COMPUTER
        dylan.beattie
        MYDOMAIN
        No

    Now - my understanding is that the WScript.Network object will retrieve the current security credentials of the thread that's actually running the code. If this is correct, then why is the same user, on the same domain, getting different results from CScript.exe vs. ASP? If my ASP code is running as dylan.beattie, then why can't I see the network share? And if it's not running as dylan.beattie, why does WScript.Network think it is?

  • Python: fetching SVG file using urllib is returning binary when I need ASCII

    - by Drew Dara-Abrams
    I'm using urllib (in Python) to fetch an SVG file:

        import urllib
        urllib.urlopen('http://alpha.vectors.cloudmade.com/BC9A493B41014CAABB98F0471D759707/-122.2487,37.87588,-122.265823,37.868054?styleid=1&viewport=400x231').read()

    which produces output of the sort:

        xb6\xf6\x00\xb3\xfb2\xff\xda\xc5\xf2\xc2\x14\xef\xcd\x82\x0b\xdbU\xb0\x81\xcaF\xd8\x1a\xf6\xdf[i)\xba\xcf\x80\xab\xd6\x8c\xe3l_\xe7\n\xed2,\xbdm\xa0_|\xbb\x12\xff\xb6\xf8\xda\xd9\xc3\xd9\t\xde\x9a\xf8\xae\xb3T\xa3\r`\x8a\x08!T\xfb8\x92\x95\x0c\xdd\x8b!\x02P\xea@\x98\x1c^\xc7\xda\\\xec\xe3\xe1\xbe,0\xcd\xbeZ~\x92\xb3\xfa\xdd\xfcbyu\xb8\x83\xbb\xbdS\x0f\x82\x0b\xfe\xf5_\xdawn\xff\xef_\xff\xe5\xfa\x1f?\xbf\xffoZ\x0f\x8b\xbfV\xf4\x04\x00'

    when I was expecting more like this:

        <?xml version='1.0' encoding='UTF-8'?>
        <svg xmlns="http://www.w3.org/2000/svg" xmlns:cm="http://cloudmade.com/" width="400" height="231">
          <rect width="100%" height="100%" fill="#eae8dd" opacity="1"/>
          <g transform="scale(0.209849975856)">
            <g transform="translate(13610569, 4561906)" flood-opacity="0.1" flood-color="grey">
              <path d="M -13610027.720000000670552 -4562403.660000000149012

    I guess this is an issue of binary vs. ASCII. Can anyone help me (a Python newbie) with the appropriate conversion so that I can get on with parsing and manipulating the SVG code?

  • Is an LSA MSV1_0 subauthentication package needed for some impersonation use cases?

    - by Chris Sears
    Greetings, I'm working with a vendor who has implemented some code that uses a Windows LSA MSV1_0 subauthentication package (MSDN info, if you're interested: http://msdn.microsoft.com/en-us/library/aa374786(VS.85).aspx ), and I'm trying to figure out if it's necessary. As far as I can tell, the subauthentication routine and filter allow for hooking or customizing the standard LSA MSV1_0 logon event processing. The issue is that I don't understand why the vendor's product would need these capabilities.

    I've asked them, and they said they use it to perform impersonation. The product definitely does need to do impersonation, but based on my limited Win32 knowledge, they could get the functionality they need using the normal auth APIs (LsaLogonUser, ImpersonateLoggedOnUser, etc.) without the subauthentication package. Furthermore, I've worked with a number of similar products that all do impersonation, and this is the only one that's used a subauthentication package.

    If you're wondering why I would care: a previous version of the product had a bug in the subauthentication package dll that would cause lockups or bluescreens. That makes me rather nervous and has me questioning the use of such a low-level, kernel-sensitive interface. I'd like to go back to the vendor and say "There's no way you could need an LSA subauth package for impersonation - take it out", but I'm not sure I understand the use cases and possible limitations of the standard Win32 authentication/impersonation APIs well enough to make that claim definitively.

    So, to the Win32 security gurus out there: is there any reason you would need an LSA MSV1_0 subauthentication package if all you were doing is impersonation? Thanks in advance for any thoughts!
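
    For reference, the plain-API impersonation pattern the question has in mind looks roughly like this from managed code (a sketch with placeholder credentials; the native route is LogonUser/ImpersonateLoggedOnUser) - no subauthentication package involved:

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        class ImpersonationDemo
        {
            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            const int LOGON32_LOGON_INTERACTIVE = 2;
            const int LOGON32_PROVIDER_DEFAULT  = 0;

            static void Main()
            {
                IntPtr token;
                if (!LogonUser("someUser", "SOMEDOMAIN", "password",
                               LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                    throw new System.ComponentModel.Win32Exception();

                // Everything inside the using runs as the logged-on user.
                using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
                {
                    Console.WriteLine(WindowsIdentity.GetCurrent().Name);
                }
                // Production code should also close the token handle with CloseHandle.
            }
        }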

  • HttpWebRequest: The request was aborted: The request was canceled.

    - by Emeka
    I've been working on developing a middleman application of sorts, which uploads text to a CMS backend using HTTP POST requests for a series of dates (usually 7 at a time). I am using HttpWebRequest to accomplish this. It seems to work fine for the first date, but when it starts the second date I get: System.Net.WebException: The request was aborted: The request was canceled.

    I've searched around and found the following big clues:

    - http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/0d0afe40-c62a-4089-9d8b-fb4d206434dc
    - http://www.jaxidian.org/update/2007/05/05/8
    - http://arnosoftwaredev.blogspot.com/2006/09/net-20-httpwebrequestkeepalive-and.html

    They haven't been too helpful, though. I've tried overloading GetWebRequest, but that doesn't make sense, because I don't make any use of that function. Here is my code: http://pastebin.org/115268

    I get the error on line 245, after it has run successfully at least once. I'd appreciate any help I can get, as this is the last step in a project I've been working on for some time. This is my first C#/VS project, so I'm open to any tips, but I would like to focus on getting this problem solved first. Thanks!
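
    Two fixes come up again and again for this exact exception, both consistent with the links above: dispose every response and stream deterministically, and disable keep-alive so a stale pooled connection is never reused for the next date's request. A sketch of the fully-disposed shape (the helper name and parameters are illustrative):

        using System.IO;
        using System.Net;
        using System.Text;

        static string PostForm(string url, string body)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.KeepAlive = false;   // per the KeepAlive articles cited above

            byte[] data = Encoding.UTF8.GetBytes(body);
            request.ContentLength = data.Length;
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }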

  • Source Control Manager Backend

    - by Gabriel Parenza
    Hi Friends, what do you think is the better approach for a Source Control Manager backend? I am weighing the file system vs. a hosted Subversion service.

    Hosted Subversion (my company already has another group taking care of this) advantages:

    - zero maintenance on our end
    - auto-backup and recovery
    - reliability via auto-backup and file redundancy
    - file history view built in, file merge, file diff

    On the other hand, while the file system does not have the features mentioned above, it is much simpler. Moreover, if the files are hosted on a Linux machine which is backed up, that takes care of file-system crash issues. Subversion will need working copies, which are going to be on this same Linux machine, and hence there is no need for an extra layer.

    Folks, I am looking for stronger reasons why I should take Subversion instead of keeping things simple and going with the file system. Let me know your opinions. Many thanks in advance, Gabriel.

    PS: I have explored a few commercial source managers, and have decided to go this route as it better suits our needs.

  • C++ Formatting like visual studio c# formatting

    - by Fire-Dragon-DoL
    I like the way Visual Studio (2008) formats C# code; unfortunately, it seems it doesn't behave the same way when writing C++ code. For example, when I write code in this way:

        class Test {
        public:
            int x;

            Test() {this->x=20;}
            ~Test(){}
        };

    in C# (OK, this is C++, but you can understand what I mean), this part:

        Test() {this->x=20;}

    will become:

        Test()
        {
            this->x=20;
        }

    This is obviously a stupid example, but there are a lot of things where putting brackets in the correct position, indenting code, and other such chores done by my own hands become boring.

    I can obviously change editor, if you can suggest me a good one for C++ code. I would like to find something with these features:

    - Intellisense (like VS, or at least similar)
    - custom class coloring (in C# classes are cyan; why are they black in C++?)
    - word wrap (possibly)
    - documentation when you mouse over a method/variable
    - auto-formatting (when you close a bracket like "}" in C#, you get everything well formatted)

    Obviously I can find other features, but this is what is in my mind at the moment. Thanks for any suggestion.

  • User Control SiteMap Provider Rendering Error

    - by Serexx
    I have created a custom server control that renders the SiteMap as a pure UL for CSS styling. At run time it renders properly, but in VS 2008 Design View, VS shows this error:

        Error Rendering Control - menuMain
        An unhandled exception has occurred. The provider 'AspNetXmlSiteMapProvider' specified for the defaultProvider does not exist in the providers collection.

    I have 'AspNetXmlSiteMapProvider' specified in web.config as per here: link text

    While I am happy that the code runs properly, the designer error is bothersome if the underlying issue might cause the code to break in some circumstances, so I need to understand what is going on...

    The code explicitly references the sitemap in the Render method with:

        int level = 1;
        string ul = string.Format("<div class='{0}' id='{1}'>{2}</div>",
            CssClassName, this.ID.ToString(),
            EnumerateNodesRecursive(SiteMap.RootNode, level));
        output.Write(ul);

    and the recursive method called references SiteMap.CurrentNode. Otherwise there are no explicit sitemap references in the code. Does anyone have any ideas why the designer is complaining?
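
    Whatever the designer's underlying config problem turns out to be, one low-risk way to silence it is to short-circuit Render at design time: Control.DesignMode is true inside the VS designer, where the provider collection is not wired up. A sketch over the question's own Render code:

        protected override void Render(HtmlTextWriter output)
        {
            if (DesignMode)
            {
                // The designer has no runtime web.config/provider collection,
                // so don't touch SiteMap here - just draw a placeholder.
                output.Write("<div>[menuMain: site map rendered at run time]</div>");
                return;
            }

            int level = 1;
            string ul = string.Format("<div class='{0}' id='{1}'>{2}</div>",
                CssClassName, this.ID.ToString(),
                EnumerateNodesRecursive(SiteMap.RootNode, level));
            output.Write(ul);
        }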

  • conditionals for C++ using MSBuild/vsbuild?

    - by redtuna
    I have a C++ project in Visual Studio 2008, and I'd like to be able to compile several versions from the command line, defining conditional variables (aka #define). If it were just a single file to compile, I'd use something like cl /D, but this is complex enough that I would like to be able to use VS's other features, like the build order etc.

    I've seen a similar question asked on stackoverflow, and the answer was to use /p:DefineConstants="var1;var2". This doesn't seem to work with C++, though. The other problem with that answer is that it replaces the conditional variables instead of adding to them. The vcproj files for C++ look quite different. If msbuild (or vsbuild) had a way to change Configurations/Tool[name="VCCLCompilerTool"] we'd be golden, but I haven't found such an option.

    The vcproj files are under source control, so I'd rather not have a script mess with them. I've considered doubling the number of configurations (one with the #define, one without). That'd be annoying, and I'm especially unhappy with having to modify these configurations in tandem every time I need to modify anything there. A previous similar question found no solution; I'm hoping that has changed since?

    How would you go about building those variants (with and without the define) from the command line? Thanks!

  • db4o Replication System: NullReferenceException?

    - by virtualmic
    Hi, I am trying to do standard bi-directional replication as follows; however, I get a NullReferenceException. This is a separate replication project. I did import the classes involved in the original project (such as Item, Category, etc.) into this replication project. What am I doing wrong? (If I debug using VS, I can see that changedObjects does have all the changed objects; there seems to be some problem inside the Replicate function.)

        IObjectContainer local = Db4oFactory.OpenFile(@"G:\Work\School\MIS\VINMIS\Inventory\bin\Debug\vin.db4o");
        IObjectContainer far = Db4oFactory.OpenFile(@"\\crs-lap\c$\vinmis\vin.db4o");

        IReplicationSession replication = Replication.Begin(local, far);

        IObjectSet changedObjects = replication.ProviderA().ObjectsChangedSinceLastReplication();
        while (changedObjects.HasNext())
            replication.Replicate(changedObjects.Next()); // Exception!!!
        replication.Commit();

        changedObjects = replication.ProviderB().ObjectsChangedSinceLastReplication();
        while (changedObjects.HasNext())
            replication.Replicate(changedObjects.Next());
        replication.Commit();

    Regards, Saurabh.
