Search Results

Search found 11135 results on 446 pages for 'thread safe'.


  • How do I get folder size with Exchange Web Services 2010 Managed API?

    - by Adam Tuttle
    I'm attempting to use the EWS 2010 Managed API to get the total size of a user's mailbox. I haven't found a web service method that returns this directly, so I figured I would try to calculate it. I found one seemingly applicable question on another site about finding mailbox sizes with EWS 2007, but either I'm not understanding what it's asking me to do, or that method just doesn't work with EWS 2010. Noodling around in the code insight, I wrote what I thought was a method that would traverse the folder structure recursively and return a combined total for all folders inside the Inbox:

        private int traverseChildFoldersForSize(Folder f)
        {
            int folderSizeSum = 0;
            if (f.ChildFolderCount > 0)
            {
                foreach (Folder c in f.FindFolders(new FolderView(10000)))
                {
                    folderSizeSum += traverseChildFoldersForSize(c);
                }
            }
            folderSizeSum += (int)f.ManagedFolderInformation.FolderSize;
            return folderSizeSum;
        }

    (This assumes there aren't more than 10,000 folders inside a given folder. Figure that's a safe bet...) Unfortunately, this doesn't work. I'm initiating the recursion with this code:

        Folder root = Folder.Bind(svc, WellKnownFolderName.Inbox);
        int totalSize = traverseChildFoldersForSize(root);

    But a NullReferenceException is thrown, essentially saying that [folder].ManagedFolderInformation is a null object reference. For clarity, I also attempted to just get the size of the root folder:

        Console.Write(root.ManagedFolderInformation.FolderSize.ToString());

    which threw the same exception, so I know it's not just that ManagedFolderInformation stops existing once you reach a certain depth in the folder tree. Any ideas on how to get the total size of the user's mailbox? Am I barking up the wrong tree?
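    A note on the null reference: ManagedFolderInformation is only populated for managed folders, so it is null for ordinary mailbox folders. A common workaround is to read each folder's size through the underlying MAPI property PR_MESSAGE_SIZE_EXTENDED (tag 0x0E08) requested as an extended property. The following is a minimal sketch, not a tested implementation; svc stands for an authenticated ExchangeService, and the paging is naive (only the first 1000 folders are read):

        // Sketch: sum folder sizes via the MAPI property PR_MESSAGE_SIZE_EXTENDED
        // (0x0E08, type PT_I8) instead of ManagedFolderInformation, which is
        // null for non-managed folders.
        ExtendedPropertyDefinition folderSizeProp =
            new ExtendedPropertyDefinition(0x0E08, MapiPropertyType.Long);

        FolderView view = new FolderView(1000);
        view.Traversal = FolderTraversal.Deep;  // no manual recursion needed
        view.PropertySet = new PropertySet(BasePropertySet.IdOnly, folderSizeProp);

        long totalBytes = 0;
        foreach (Folder f in svc.FindFolders(WellKnownFolderName.MsgFolderRoot, view))
        {
            object size;
            if (f.TryGetProperty(folderSizeProp, out size))
            {
                totalBytes += (long)size;
            }
        }

    Using long rather than int also avoids the overflow that the original cast to int would hit on mailboxes larger than 2 GB.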

    Read the article

  • Visual Studio 2010 Can no longer build .NET v3.5

    - by Adam Driscoll
    I have a 2010 project that targets .NET v3.5. Inexplicably, I can no longer build v3.5 projects. The project doesn't have ANY references added; it won't even let me add a reference to System.Core, as that is added by the 'build system'.

        warning CS1685: The predefined type 'System.Func' is defined in multiple assemblies
        in the global alias; using definition from
        'c:\Windows\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll'
        IFilter.cs(82,49): error CS0433: The type 'System.Func' exists in both
        'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll'
        and 'c:\Windows\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll'

    It looks like something is grabbing onto 4.0, but I'm not quite sure how to fix it. Has anyone else run into this? A coworker had this same issue, and it took a reinstall of Windows to correct the problem. I've opened a bug on this one: https://connect.microsoft.com/VisualStudio/feedback/details/558245/warning-cs1685-when-compiling-a-v3-5-net-application-in-visual-studio-2010

    If the compiler is set to verbose, I see this:

        FrameworkPathOverride = C:\Windows\Microsoft.NET\Framework\v4.0.30319

    which is documented as: "Specifies the location of mscorlib.dll and microsoft.visualbasic.dll. This parameter is equivalent to the /sdkpath switch of the vbc.exe compiler."

    Some other interesting tidbits:

    - I've created a new project altogether and cannot build v3.5 at all.
    - I can build 2.0, 3.0, 3.5 Client Profile, 4.0 and 4.0 Client Profile with no problem.
    - VB.NET can build v3.5, but C# cannot.
    - I've tried a reinstall of .NET 3.5, 4.0 and Visual Studio 2010 with no success.
    - Visual Studio debug logs show nothing interesting, and Safe Mode does not help.

    I'm trying to avoid a Windows reinstall...

    Read the article

  • malloc: error checking and freeing memory

    - by yCalleecharan
    Hi, I'm using malloc and want to check whether memory can be allocated for a particular array z1. ARRAY_SIZE is predefined with a numerical value. I use a cast, as I've read it's safe to do so:

        long double *z1 = (long double *)malloc(sizeof(long double) * ARRAY_SIZE);
        if (z1 == NULL) {
            printf("Out of memory\n");
            exit(-1);
        }

    The above is just a snippet of my code, but when I add the error-checking part (contained in the if statement above), I get a lot of compile-time errors with Visual Studio 2008. It is this error-checking part that's generating all the errors. What am I doing wrong?

    On a related issue with malloc, I understand that the memory needs to be deallocated/freed after the variable/array z1 has been used. For the array z1, I use:

        free(z1);
        z1 = NULL;

    Is the second line, z1 = NULL, necessary? Thanks a lot...

    Read the article

  • Saving record in Subsonic 3 using Active Record

    - by singfoom
    I'm having trouble saving a record in SubSonic 3 using ActiveRecord. I've generated my objects using the DALs and .tt templates, and everything seems fine because the following test passes. I think my connection string is correct, or the generation wouldn't have succeeded.

        [Test]
        public void TestSavingAnEmail()
        {
            Email testEmail = new Email();
            testEmail.EmailAddress = "[email protected]";
            testEmail.Subscribed = true;
            testEmail.Save();
            Assert.AreEqual(1, Email.All().Count());
        }

    On the live side, the following code fails:

        protected void btEmailSubmit_Click(object sender, EventArgs e)
        {
            Email email = new Email();
            email.EmailAddress = txtEmail.Text;
            email.Subscribed = chkSubscribe.Checked;
            email.Save();
        }

    with a message of "Need to specify Values or a Select query to insert - can't go on!" at the repo.Add(this, provider) line in my ActiveRecord.cs:

        public void Add(IDataProvider provider)
        {
            var key = KeyValue();
            if (key == null)
            {
                var newKey = _repo.Add(this, provider);
                this.SetKeyValue(newKey);
            }
            else
            {
                _repo.Add(this, provider);
            }
            SetIsNew(false);
            OnSaved();
        }

    Am I doing something horribly wrong here? The Save and Add methods have parameterless overloads that I thought were safe to use. Do I need to pass a provider? I've googled around for this for a while and was unable to come up with anything specific to my situation. Thanks in advance for any kind of answer.

    Read the article

  • Debugging StoreKit functionality without a device

    - by Rolf Staflin
    Hi! I'm porting an app with in-app purchase functionality from the iPhone to the iPad, but the simulator doesn't handle any StoreKit calls (they fail immediately with a warning). Since there are no iPads available yet, I can't use a device for debugging. I have thought of a few alternatives:

    1. Use the iPhone code as-is. This is the safe bet, since everything already works on the iPhone, but it will not feel right on the iPad (there are several ViewControllers in a NavigationController involved on the iPhone, but everything could fit on one screen on the pad).
    2. Write a test framework for StoreKit. I actually started doing this, but I won't be able to finish it in time.
    3. Submit untested code and pray. Nah, just kidding -- that's not an option :)

    So, any thoughts? Has anyone else written a test framework for this kind of thing? I googled a bit but couldn't find anything. I'm grateful for any thoughts on this!

    Read the article

  • How to get current Joomla user with external PHP script

    - by Joey Adams
    I have a couple PHP scripts used for AJAX queries, but I want them to be able to operate under the umbrella of Joomla's authentication system. Is the following safe? Are there any unnecessary lines?

    joomla-auth.php (located in the same directory as Joomla's index.php):

        <?php
        define('_JEXEC', 1);
        define('JPATH_BASE', dirname(__FILE__));
        define('DS', DIRECTORY_SEPARATOR);
        require_once (JPATH_BASE . DS . 'includes' . DS . 'defines.php');
        require_once (JPATH_BASE . DS . 'includes' . DS . 'framework.php');

        /* Create the Application */
        $mainframe =& JFactory::getApplication('site');

        /* Make sure we are logged in at all. */
        if (JFactory::getUser()->id == 0)
            die("Access denied: login required.");
        ?>

    test.php:

        <?php
        include 'joomla-auth.php';
        echo 'Logged in as "' . JFactory::getUser()->username . '"';
        /* We then proceed to access things only the user
           of that name has access to. */
        ?>

    Read the article

  • Hadoop safemode recovery - taking lot of time

    - by Algorist
    Hi, we are running our cluster on Amazon EC2 and using the Cloudera scripts to set up Hadoop. On the master node, we start the services below:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start namenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start secondarynamenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker'

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop dfsadmin -safemode wait'

    On the slave machines, we run the services below:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start datanode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker'

    The main problem we are facing is that HDFS safemode recovery takes more than an hour, and this delays our job completion. Below are the main log messages:

        1. domU-12-31-39-0A-34-61.compute-1.internal 10/05/05 20:44:19 INFO ipc.Client:
           Retrying connect to server: ec2-184-73-64-64.compute-1.amazonaws.com/10.192.11.240:8020.
           Already tried 21 time(s).
        2. The reported blocks 283634 needs additional 322258 blocks to reach the threshold
           0.9990 of total blocks 606499. Safe mode will be turned off automatically.

    The first message appears in the tasktrackers' logs because the jobtracker has not started, and the jobtracker didn't start because of HDFS safemode recovery. The second message is thrown during the recovery process. Is there something I am doing wrong? How much time does normal HDFS safemode recovery take? Would there be any speedup from not starting the tasktrackers until the jobtracker is up? Are there any known Hadoop problems on Amazon clusters? Thanks for your help. Regards, Bala Mudiam

    Read the article

  • System.Web.Caching vs. Enterprise Library Caching Block

    - by ESV
    For a .NET component that will be used in both web applications and rich client applications, there seem to be two obvious options for caching: System.Web.Caching or the Enterprise Library Caching Application Block. What do you use? Why?

    System.Web.Caching: is this safe to use outside of web apps? I've seen mixed information, but I think the answer is maybe-kind-of-not-really:

    - A KB article warns against non-web-app use on 1.0 and 1.1.
    - The 2.0 page has a comment that indicates it's OK: http://msdn.microsoft.com/en-us/library/system.web.caching.cache(VS.80).aspx
    - Scott Hanselman is creeped out by the notion.
    - The 3.5 page includes a warning against such use.
    - Rob Howard encouraged use outside of web apps.

    I don't expect to use one of its highlights, SqlCacheDependency, but the addition of CacheItemUpdateCallback in .NET 3.5 seems like a Really Good Thing.

    Enterprise Library Caching Application Block:

    - Other blocks are already in use, so the dependency already exists.
    - Cache persistence isn't necessary; regenerating the cache on restart is OK.
    - Some cache items should always be available, but be refreshed periodically. For these items, getting a callback after an item has been removed is not very convenient; it looks like a client will have to just sleep and poll until the cache item is repopulated.

    Memcached for Win32 + .NET client: what are the pros and cons when you don't need a distributed cache?
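    The CacheItemUpdateCallback mentioned above fires before removal and lets you supply a replacement value, which addresses the refresh-periodically scenario directly. A minimal sketch of how it wires up through HttpRuntime.Cache (which is reachable from non-web processes, with the caveats discussed above); RebuildExpensiveValue is a hypothetical stand-in for the real regeneration step:

        using System;
        using System.Web;
        using System.Web.Caching;

        static class PeriodicCache
        {
            // Sketch: keeps an item permanently cached, rebuilding it in place
            // every 10 minutes instead of letting it drop out of the cache.
            public static void Insert(string key, object value)
            {
                HttpRuntime.Cache.Insert(
                    key, value,
                    null,                              // no dependencies
                    DateTime.UtcNow.AddMinutes(10),    // absolute expiration
                    Cache.NoSlidingExpiration,
                    OnUpdate);                         // .NET 3.5 overload
            }

            static void OnUpdate(string key, CacheItemUpdateReason reason,
                out object value, out CacheDependency dependency,
                out DateTime absoluteExpiration, out TimeSpan slidingExpiration)
            {
                // Called before removal; supplying a new value keeps the
                // item in the cache, so readers never see a gap.
                value = RebuildExpensiveValue(key);
                dependency = null;
                absoluteExpiration = DateTime.UtcNow.AddMinutes(10);
                slidingExpiration = Cache.NoSlidingExpiration;
            }

            // Hypothetical regeneration step.
            static object RebuildExpensiveValue(string key) { return new object(); }
        }

    Because the callback replaces the value rather than evicting it, the sleep-and-poll problem described for the Enterprise Library block does not arise.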

    Read the article

  • Hidden features of Perl?

    - by Adam Bellaire
    What are some really useful but esoteric language features in Perl that you've actually been able to employ to do useful work?

    Guidelines:
    - Try to limit answers to the Perl core and not CPAN
    - Please give an example and a short description

    Hidden features also found in other languages' hidden-features questions (these are all from Corion's answer):
    - C#: Duff's Device; portability and standardness; quotes for whitespace-delimited lists and strings; aliasable namespaces
    - Java: static initializers
    - JavaScript: functions are first-class citizens; block scope and closure; calling methods and accessors indirectly through a variable
    - Ruby: defining methods through code
    - PHP: pervasive online documentation; magic methods; symbolic references
    - Python: one-line value swapping; the ability to replace even core functions with your own functionality

    Other hidden features:

    Operators:
    - The bool quasi-operator
    - The flip-flop operator (also used for list construction)
    - The ++ and unary - operators work on strings
    - The repetition operator
    - The spaceship operator
    - The || operator (and // operator) to select from a set of choices
    - The diamond operator
    - Special cases of the m// operator
    - The tilde-tilde "operator"

    Quoting constructs:
    - The qw operator
    - Letters can be used as quote delimiters in q{}-like constructs
    - Quoting mechanisms

    Syntax and names:
    - There can be a space after a sigil
    - You can give subs numeric names with symbolic references
    - Legal trailing commas
    - Grouped integer literals
    - Hash slices
    - Populating keys of a hash from an array

    Modules, pragmas, and command-line options:
    - use strict and use warnings
    - Taint checking
    - Esoteric use of -n and -p
    - CPAN
    - overload::constant
    - The IO::Handle module
    - Safe compartments
    - Attributes

    Variables:
    - Autovivification
    - The $[ variable
    - tie
    - Dynamic scoping
    - Variable swapping with a single statement

    Loops and flow control:
    - Magic goto
    - for on a single variable
    - The continue clause
    - Desperation mode

    Regular expressions:
    - The \G anchor
    - (?{}) and (??{}) in regexes

    Other features:
    - The debugger
    - Special code blocks such as BEGIN, CHECK, and END
    - The DATA block
    - New block operations
    - Source filters
    - Signal hooks
    - map (twice)
    - Wrapping built-in functions
    - The eof function
    - The dbmopen function
    - Turning warnings into errors

    Other tricks, and meta-answers:
    - cat files, decompressing gzips if needed
    - Perl Tips

    See also: Hidden features of C, C#, C++, Java, JavaScript, Ruby, PHP, and Python.

    Read the article

  • Java Generics: Dealing with an intermediate unknown type

    - by Matt
    I am trying to figure out a way to deal with a particular problem in a type-safe manner, or to put it more specifically, without any explicit casts. I have a class that takes in a generic request and returns a generic response, like so:

        public class RetrievalProcessor<Req extends Request, Resp> implements RequestProcessor<Req, Resp> {
            private Dao<Req> dao;
            private RawResponseTransformer<Resp> transformer;

            @Override
            public Resp process(Req request) {
                return transformer.transformResponse(dao.retrieveRawResponse(request));
            }
        }

    My problem is the following: my Dao object can be many different things (REST, JDBC, some other proprietary object), so I can't be certain of the type of object that the Dao will return. I do know the type of object my caller would like, and that is the Resp type on the generic; the job of the RawResponseTransformer is to transform the Dao response into something the caller can consume. The problem is I can't figure out a way that feels clean to do that. I have considered putting the intermediate type into the definition of the class, but it doesn't seem like the caller should know, or really care, what the intermediate form is. Hoping someone might have a good clean idea for handling this.

    Read the article

  • SQL 2000 (MSDE) Hangs When It Receives an Erroneous Query from a Classic ASP Web Application

    - by Jimbo
    I have a SQL interface page in my classic ASP web app that allows admin users to run queries against the app's database (MSDE 2000). It simply consists of a textarea that the user submits, and the app returns the resulting list of records, as below:

        Dim oRS
        Set oRS = Server.CreateObject("ADODB.Recordset")
        oRS.ActiveConnection = sConnectionString

        ' Run the query - this is for admins only, so it doesn't check for safe SQL commands etc.
        oRS.Open Request.Form("txtSQL")

        If Not oRS.EOF Then
            ' List the field names from the recordset
            For i = 0 To oRS.Fields.Count - 1
                Response.Write oRS.Fields(i).Name & "&nbsp;"
            Next

            ' Show the data for each record in the recordset
            While Not oRS.EOF
                For i = 0 To oRS.Fields.Count - 1
                    Response.Write oRS.Fields(i).Value & "&nbsp;"
                Next
                Response.Write "<br />"
                oRS.MoveNext()
            Wend
        End If

    The problem with this is that if you send it an invalid query (with a spelling mistake, an invalid join, etc.), instead of throwing back an error immediately, it hangs IIS for a number of minutes (you can see this by trying to browse the app from another computer; it fails) and THEN returns the error. I have NO idea why! Can anyone help?

    Read the article

  • Binary to strings as binary? C#/.NET

    - by acidzombie24
    Redis keys are binary safe. I'd like to mess around and put binary into Redis using C#. My client of choice doesn't support writing binary keys; it uses string keys, and that makes sense. However, I am just fooling around, so tell me how I can do this. How do I convert a raw byte[] into a string?

    At first I was thinking about converting a byte[] to a UTF-8 string; however, Unicode decoding has checks to see whether the input is valid or not, so raw binary should fail. I actually tried it out: instead of failing, I got a strange result. My main question is: how do I convert a raw byte[] to the equivalent string? My less important question is: why did I get a 512-byte string instead of an exception saying this is not a valid UTF-8 sequence?

    Code:

        var rainbow = new byte[256];
        for (int i = 0; i < 256; i++)
        {
            rainbow[i] = (byte)i;
        }
        var sz = Encoding.UTF8.GetString(rainbow);
        var szarr = Encoding.UTF8.GetBytes(sz);
        Console.WriteLine("{0} {1} {2}", ByteArraysEqual(szarr, rainbow), szarr.Length, rainbow.Length);

    Output:

        False 512 256
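    The 512 bytes fall out of the decoder's default behavior: Encoding.UTF8.GetString does not throw on invalid input, it substitutes U+FFFD for each invalid byte (here, bytes 0x80-0xFF), and each U+FFFD re-encodes as three UTF-8 bytes, so 128 valid bytes + 128 x 3 = 512. For a lossless byte[]-to-string round trip, two common options are Latin-1, which maps every byte value to a distinct character, and Base64. A minimal sketch of both:

        using System;
        using System.Text;

        class BinaryKeyDemo
        {
            static void Main()
            {
                var rainbow = new byte[256];
                for (int i = 0; i < 256; i++) rainbow[i] = (byte)i;

                // Option 1: Latin-1 maps bytes 0x00-0xFF one-to-one to chars
                // U+0000-U+00FF, so the round trip is lossless.
                Encoding latin1 = Encoding.GetEncoding("ISO-8859-1");
                string key = latin1.GetString(rainbow);
                byte[] back1 = latin1.GetBytes(key);
                Console.WriteLine(back1.Length);         // 256

                // Option 2: Base64 is ASCII-safe and unambiguous,
                // at roughly 33% size cost.
                string b64 = Convert.ToBase64String(rainbow);
                byte[] back2 = Convert.FromBase64String(b64);
                Console.WriteLine(back2.Length);         // 256
            }
        }

    If you want the exception instead of silent substitution, construct the decoder as new UTF8Encoding(false, true), whose second argument is throwOnInvalidBytes.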

    Read the article

  • WCF MessageHeaders in OperationContext.Current

    - by Nate Bross
    If I use code like this [just below] to add message headers to my OperationContext, will all future outgoing messages contain that data on any new client proxy created during the same run of my application? The objective is to pass a parameter or two to each OperationContract without messing with the signature of the OperationContract, since the parameters being passed will be consistent for all requests in a given run of my client application.

        public void DoSomeStuff()
        {
            var proxy = new MyServiceClient();
            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            proxy.DoOperation(...);
        }

        public void DoSomeOTHERStuff()
        {
            var proxy = new MyServiceClient();
            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            proxy.DoOtherOperation(...);
        }

    In other words, is it safe to refactor the above code like this?

        bool isSetup = false;

        public void SetupMessageHeader()
        {
            if (isSetup) { return; }

            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            isSetup = true;
        }

        public void DoSomeStuff()
        {
            var proxy = new MyServiceClient();
            SetupMessageHeader();
            proxy.DoOperation(...);
        }

        public void DoSomeOTHERStuff()
        {
            var proxy = new MyServiceClient();
            SetupMessageHeader();
            proxy.DoOtherOperation(...);
        }

    Since I don't really understand what's happening there, I don't want to cargo-cult it and just change it and let it fly if it works. I'd like to hear your thoughts on whether it is OK or not.
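    One caveat worth noting before refactoring: headers added to OperationContext.Current belong to whichever context is current at that moment, not to every proxy created later, and in plain client code OperationContext.Current can even be null outside any scope. The usual client-side pattern is to tie the headers to a specific proxy's channel with an OperationContextScope. A minimal sketch (MyServiceClient and DoOperation stand in for the generated proxy and its operation):

        public void DoSomeStuff()
        {
            var proxy = new MyServiceClient();

            // Inside the scope, OperationContext.Current refers to this
            // proxy's channel; headers added here apply to calls made within
            // the using block and are discarded when the scope is disposed.
            using (new OperationContextScope(proxy.InnerChannel))
            {
                MessageHeader untyped =
                    new MessageHeader<Guid>(Guid.NewGuid()).GetUntypedHeader("token", "ns");
                OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
                proxy.DoOperation();
            }
        }

    Wrapping the scope-plus-header setup in a small helper gives the same convenience as SetupMessageHeader without relying on an ambient context outliving the proxies.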

    Read the article

  • AutoMapper How To Map Object A To Object B Differently Depending On Context

    - by IanT8
    Calling all AutoMapper gurus! I'd like to be able to map object A to object B differently depending on context at runtime. In particular, I'd like to ignore certain properties in one mapping case and have all properties mapped in another case.

    What I'm experiencing is that Mapper.CreateMap can be called successfully for the different mapping cases; however, once CreateMap is called, the map for a particular pair of types is set and is not subsequently changed by succeeding CreateMap calls that might describe the mapping differently. I found a blog post that advocates Mapper.Reset() to get around the problem; however, the static nature of the Mapper class means that it is only a matter of time before a collision and crash occur.

    Is there a way to do this? What I think I need is to call Mapper.CreateMap once per appdomain and, later, be able to call Mapper.Map with hints about which properties should be included or excluded. Right now, I'm thinking about changing the source code by writing a non-static mapping class that holds the mapping config per instance: poor performance, but thread safe. What are my options? What can be done? AutoMapper seems so promising.
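    For what it's worth, later AutoMapper releases made configuration instance-based, which addresses exactly this collision: each context gets its own MapperConfiguration and IMapper, so two mappings for the same type pair can coexist without Reset(). A sketch assuming that newer instance API, with hypothetical SourceA/DestB types:

        using AutoMapper;

        class SourceA { public string Name { get; set; } public string Secret { get; set; } }
        class DestB   { public string Name { get; set; } public string Secret { get; set; } }

        static class ContextualMapping
        {
            // Two independent, thread-safe configurations for the same type pair.
            static readonly IMapper Full = new MapperConfiguration(cfg =>
                cfg.CreateMap<SourceA, DestB>()).CreateMapper();

            static readonly IMapper Partial = new MapperConfiguration(cfg =>
                cfg.CreateMap<SourceA, DestB>()
                   .ForMember(d => d.Secret, opt => opt.Ignore())).CreateMapper();

            public static DestB Map(SourceA source, bool includeEverything)
            {
                // Pick the mapping by context at runtime; no Reset() needed.
                return includeEverything ? Full.Map<DestB>(source)
                                         : Partial.Map<DestB>(source);
            }
        }

    On versions that only offer the static Mapper class, the equivalent workaround is the non-static wrapper described above, and the per-instance configuration cost is paid once rather than per map call.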

    Read the article

  • DotNetOpenAuth OpenID on ISA 2006 Reverse Proxy problem

    - by userb00
    I am trying to host my site, which uses DotNetOpenAuth (OpenID), behind ISA 2006 (reverse proxy). After it authenticates with a provider (such as Google), it returns with a URL containing %253A, and the ISA HTTP filter rejects the request. What I needed to do was, on the ISA web publishing rule: right-click, configure the HTTP policy properties, and uncheck "Verify Normalization". That worked.

    Is this a problem with ISA 2006 generally? Do other firewalls have similar problems? Or is it an OpenID or DotNetOpenAuth issue? Is it safe to disable normalization checking on ISA?

    According to MSDN, quote: "Web servers receive requests that are URL encoded. This means that certain characters may be replaced with a percent sign (%) followed by a particular number. For example, %20 corresponds to a space, so a request for http://myserver/My%20Dir/My%20File.htm is the same as a request for http://myserver/My Dir/My File.htm. Normalization is the process of decoding URL-encoded requests. Because the % can be URL encoded, an attacker can submit a carefully crafted request to a server that is basically double-encoded. If this occurs, Internet Information Services (IIS) may accept a request that it would otherwise reject as not valid. When you select Verify Normalization, the HTTP filter normalizes the URL two times. If the URL after the first normalization is different from the URL after the second normalization, the filter rejects the request. This prevents attacks that rely on double-encoded requests. Note that while we recommend that you use the Verify Normalization function, it may also block legitimate requests that contain a %."

    Read the article

  • Best practices for class-mapping with SoapClient

    - by Foofy
    Using SoapClient's class-mapping feature, and it's pretty sweet. Unfortunately, the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if the properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are:

    1. Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter property access, since only SoapClient would touch the properties directly. Developers would access properties like this:

        $obj->getAccountNumber()

    while SoapClient would access properties like this:

        $obj->accountNumber

    I don't like this because the properties are still exposed, and things could go wrong if developers don't stick to the convention.

    2. Have a wrapper for SoapClient that sets a public property the mapped objects can check to see whether the property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:

        class SoapClientWrapper
        {
            public function __soapCall($method, $args)
            {
                $this->setSoapMode(true);
                $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
            }
        }

        class Invoice
        {
            function __get($val)
            {
                if ($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }

    This works, but it doesn't feel right and seems a bit clunky.

    3. Do the mapping manually and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.

    Read the article

  • UX question: is it better to have "serious delete" or to have "trash"?

    - by ftrotter
    I am developing an application that allows a user to manage some individual data points. One of the things my users will want to do is "delete", but what should that mean? For a web application, is it better to present the user with the option of a serious delete, or to use a "trash" system?

    Under "serious delete" (I would love to know if there is a better name for this...), you click "delete" and the user is warned: "This is a final and tragic action. Once you do this you will not be able to get -insert data point name here- back, even if you are crying..." Then, if they click delete... well, it truly is gone forever.

    Under the "trash" model, you never trust that the user really wants to delete; instead, you remove the data point from the main display and put it into a bucket called "the trash". This gets it out of the user's way, which is what they usually want, but they can get it back if they make a mistake. Obviously, this is the way most operating systems have gone.

    The advantages of "serious delete" are:
    - Easy to implement
    - Easy to explain to users

    The disadvantages of "serious delete" are:
    - It can be tragically final
    - Sometimes, cats walk on keyboards

    The advantages of the "trash" system are:
    - The user is safe from themselves
    - Bulk methods like "delete a bunch at once" make more sense
    - It saves support headaches

    The disadvantages of the "trash" system are:
    - For sensitive data, you create an illusion of destruction: users think something is gone, but it is not
    - Lots of subtle distinctions make implementation more difficult
    - Do you "eventually" delete the contents of the trash?

    My question is: which one is the right design pattern for modern web applications? With enough discussion to justify your answer... I would love to be pointed towards some relevant research. -FT

    Read the article

  • Different users get the same value in .ASPXANONYMOUS

    - by Malcolm Frexner
    My site allows anonymous users. Under heavy load, I saw that users sometimes get profile values belonging to other users. This happens for anonymous users. I logged the access to profile data:

        /// <summary>
        ///
        /// </summary>
        /// <param name="controller"></param>
        /// <returns></returns>
        public static string ProfileID(this Controller controller)
        {
            if (ApplicationConfiguration.LogProfileAccess)
            {
                StringBuilder sb = new StringBuilder();
                (from header in controller.Request.Headers.ToPairs()
                 select string.Concat(header.Key, ":", header.Value, ";"))
                    .ToList().ForEach(x => sb.Append(x));
                string log = string.Format(
                    "ip:{0} url:{1} IsAuthenticated:{2} Name:{3} AnonId:{4} header:{5}",
                    controller.Request.UserHostAddress,
                    controller.Request.Url.ToString(),
                    controller.Request.IsAuthenticated,
                    controller.User.Identity.Name,
                    controller.Request.AnonymousID,
                    sb);
                _log.Debug(log);
            }
            return controller.Request.IsAuthenticated
                ? controller.User.Identity.Name
                : controller.Request.AnonymousID;
        }

    I can see in the log that users really do get the same cookie value for .ASPXANONYMOUS, even when they have different IPs. Just to be safe, I removed dependency injection for the FormsAuthentication. I don't use OutputCaching. My web.config has this setting for authentication:

        <anonymousIdentification enabled="true" cookieless="UseCookies"
            cookieName=".ASPXANONYMOUS" cookieTimeout="30" cookiePath="/"
            cookieRequireSSL="false" cookieSlidingExpiration="true" />
        <authentication mode="Forms">
            <forms loginUrl="~/de/Account/Login" />
        </authentication>

    Does anybody have an idea what else I could log or what I should have a look at?
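    One avenue worth checking, even with OutputCaching ruled out inside the app: identical anonymous cookies across clients are sometimes a symptom of a Set-Cookie response being cached and replayed by an intermediary (a reverse proxy, load balancer, or gateway). That diagnosis is an assumption, not a confirmed cause, but a defensive guard in Global.asax is cheap to add as a test:

        using System;
        using System.Linq;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_EndRequest(object sender, EventArgs e)
            {
                // Assumption: an upstream cache may be replaying Set-Cookie
                // headers. If this response issues the anonymous-ID cookie,
                // mark it non-cacheable so no shared cache can hand the same
                // anonymous ID to multiple clients.
                HttpResponse response = HttpContext.Current.Response;
                if (response.Cookies.AllKeys.Contains(".ASPXANONYMOUS"))
                {
                    response.Cache.SetCacheability(HttpCacheability.NoCache);
                    response.Cache.SetNoStore();
                }
            }
        }

    Logging the Cache-Control and Via headers of responses that set the cookie would also show whether anything between server and browser is caching them.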

    Read the article

  • Problem reintegrating a branch into the trunk in Subversion 1.5

    - by pako
    I'm trying to reintegrate a development branch into the trunk in my Subversion 1.5 repository. I merged all the changes from the trunk to the development branch prior to this operation. Now, when I try to reintegrate the changes from the branch, I get the following error message:

        Command: Reintegrate merge https://dev/svn/branches/devel into C:\trunk
        Error: Reintegrate can only be used if revisions 280 through 325 were previously
        Error: merged from https://dev/svn/trunk to the reintegrate
        Error: source, but this is not the case:
        Error: branches/devel/images/test
        Error: Missing ranges: /trunk/images/test:280-324
        ...

    The message then goes on complaining about some folders in my project. But when I try to merge the changes from the trunk to the development branch again, TortoiseSVN tells me that there's nothing to merge (as I already merged all the changes before):

        Command: Merging revisions 1-HEAD of https://dev/svn/trunk into C:\devel, respecting ancestry
        Completed: C:\devel

    I'm trying to follow the instructions from here: http://svnbook.red-bean.com/en/1.5/svn.branchmerge.basicmerging.html, but there's nothing about solving such a problem. Any ideas? Perhaps I should just delete the trunk and then make a copy of my branch? But I'm not really sure if that's safe.

    Read the article

  • WPF MVVM Chart change axes

    - by c0uchm0nster
    I'm new to WPF and MVVM, and I'm struggling to determine the best way to change the view of a chart. That is, initially a chart might have the axes X - ID, Y - Length; after the user changes the view (via a listbox, radio button, etc.) the chart would display X - Length, Y - ID; and after a third change by the user it might display new content: X - ID, Y - Quality.

    My initial thought was that the best way to do this would be to change the bindings themselves, but I don't know how to tell a control in XAML to bind using a Binding object in the ViewModel, or whether it's safe to change that binding at runtime. Then I thought maybe I could just have a generic Model that has members X and Y and populate them as needed in the ViewModel. My last thought was that I could have three different chart controls and just hide and show them as appropriate.

    What is the CORRECT/SUGGESTED way to do this in the MVVM pattern? Any code examples would be greatly appreciated. Thanks
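    Of the three options, the second (a generic model with X and Y members) is a common MVVM approach: the chart binds once to a collection of generic points, and the ViewModel repopulates that collection when the user picks a different view, so the XAML bindings never change at runtime. A rough sketch, with ChartPoint, Record, and the view names all hypothetical (PropertyChanged plumbing omitted for brevity):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        // One fixed shape for the chart to bind against.
        public class ChartPoint
        {
            public double X { get; set; }
            public double Y { get; set; }
        }

        public class Record
        {
            public double Id { get; set; }
            public double Length { get; set; }
            public double Quality { get; set; }
        }

        public class ChartViewModel
        {
            private readonly ObservableCollection<ChartPoint> _points =
                new ObservableCollection<ChartPoint>();
            public ObservableCollection<ChartPoint> Points { get { return _points; } }

            private string _selectedView = "LengthById";
            public string SelectedView
            {
                get { return _selectedView; }
                set { _selectedView = value; Repopulate(); }  // raise PropertyChanged in real code
            }

            private void Repopulate()
            {
                _points.Clear();
                foreach (Record r in GetRecords())
                {
                    switch (_selectedView)
                    {
                        case "IdByLength":
                            _points.Add(new ChartPoint { X = r.Length, Y = r.Id });
                            break;
                        case "QualityById":
                            _points.Add(new ChartPoint { X = r.Id, Y = r.Quality });
                            break;
                        default:
                            _points.Add(new ChartPoint { X = r.Id, Y = r.Length });
                            break;
                    }
                }
            }

            // Stand-in for real data access.
            private IEnumerable<Record> GetRecords() { return new List<Record>(); }
        }

    The chart series stays bound to Points (for the WPF Toolkit chart, something like IndependentValuePath="X" and DependentValuePath="Y"), and only the data behind the collection changes, which sidesteps rebinding entirely.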

    Read the article

  • Using IsolatedStorage on a IIS server

    - by JoeBilly
    I'm a bit confused about the use of Isolated Storage on an IIS server. I understand the goal of Isolated Storage: it provides a safe place to store data with no worry about how and where that place is. Since Isolated Storage has a by-user and by-assembly approach, I'm not wild about using it on an IIS server, where applications have almost their own identity. I haven't really seen the point of impersonating a web application, and have almost never seen impersonated web applications myself, but this is my point of view.

    Using Isolated Storage on a server means:

    - Using isolated stores in \Documents and Settings\<user>\
    - Which means \Documents and Settings\Default User\ when the application pool is owned by Local System or Network Service, I guess
    - Which also means write rights on this folder for Local System or Network Service
    - Using impersonation

    Regarding a web application's logic, these ideas are confusing me... Documents and Settings? Default User? Enable impersonation just for storage? No control over storage on the server? Huh?

    And so I am facing a dilemma: use System.IO.Packaging (with Isolated Storage inside) in web applications, or find an alternative? Am I wrong in my approach? Did I miss something? Any point of view is appreciated, and an explanation of the philosophy of Isolated Storage with respect to IIS would count as an answer. Thanks!

    Read the article

  • Check if files in a directory are still being written in Windows Batch File

    - by FMFF
    Hello. Here's my batch file to parse a directory and zip files of a certain type:

        REM Begin ------------------------
        tasklist /FI "IMAGENAME eq 7za.exe" /FO CSV > search.log
        FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end

        for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do (
            "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A"
            Move "C:\temp\%%A" "C:\Temp\Archive"
        )

        :end
        del search.log
        REM pause
        exit
        REM End ---------------------------

    This code works just fine for 90% of my needs and will be deployed as a scheduled task. However, the *.ps files are rather large (minimum of 1 GB) in real cases, so the code is supposed to check whether the incoming file is completely written and is not locked by the application that is writing it. I saw another example elsewhere that suggested the following approach:

        :TestFile
        ren c:\file.txt c:\file.txt
        if errorlevel 0 goto docopy
        sleep 5
        goto TestFile
        :docopy

    However, this example is good for a fixed file. How can I use that many labels and GOTOs inside a for loop without causing an infinite loop? Or is this code safe to use in the for loop? Thank you for any help.

    Read the article

  • Compile MonoDevelop 4.2.3

    - by user2942643
    I need help. I'm trying to compile the MonoDevelop code, but when I run "./configure" it tells me that I need a version of Mono installed, even though I have one:

        [raven@localhost ~]$ mono -V
        Mono JIT compiler version 3.2.8 (tarball Fri May 30 08:15:47 CDT 2014)
        Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
            TLS:           __thread
            SIGSEGV:       altstack
            Notifications: epoll
            Architecture:  amd64
            Disabled:      none
            Misc:          softdebug
            LLVM:          supported, not enabled.
            GC:            sgen
        [raven@localhost ~]$ cd /home/raven/Downloads/monodevelop-4.2.3
        [raven@localhost monodevelop-4.2.3]$ ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        checking how to create a ustar tar archive... gnutar
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking for mono... /usr/local/bin/mono
        checking for gmcs... /usr/local/bin/gmcs
        checking for pkg-config... /usr/bin/pkg-config
        configure: error: You need mono 3.0.4 or newer
        [raven@localhost monodevelop-4.2.3]$

    Read the article

  • Vote on Pros and Cons of Java HTML to XML cleaners

    - by George Bailey
    I am looking to allow HTML emails (and other HTML uploads) without letting in scripts and the like. I plan to have a whitelist of safe tags and attributes, as well as a whitelist of CSS properties and value regexes (to prevent automatic return receipts). I asked a question about this before: "Parse a badly formatted XML document (like an HTML file)". I found there are many, many ways to do this, and some systems have built-in sanitizers (which I don't care so much about).

    This page is a very nice listing, but I get kind of lost in it: http://java-source.net/open-source/html-parsers

    It is very important that the parsers never throw an exception; there should always be a best-guess result from the parse/clean. It is also very important that the result is valid XML that can be traversed in Java.

    I posted some product information and marked this Community Wiki. Please post any other product suggestions you like and mark them Community Wiki so they can be voted on. Also, any comments or wiki edits on which parts of a certain product are better or worse would be greatly appreciated (for example, speed vs. accuracy).

    It seems that we will go with either jsoup (seems more active and up to date) or TagSoup (compatible with JDK 4 and has been around a while). A bonus for any of these products would be the ability to convert all style sheets into inline style on the elements.

    Read the article

  • Authenticating to Google Search Appliance using Basic HTTP auth and ASP.NET (VB)

    - by Chainlink
    I've run into a snag which has to do with authentication between the Google Search Appliance and ASP.NET. Normally, when asking for secure pages from the search appliance, the appliance asks for credentials, then uses these credentials to try to access the secure results. If this attempt is successful, the page shows up in the results list. Since ASP.NET is contacting the search appliance on the client's behalf, it needs to collect credentials and pass them along to the search appliance. I have tried a couple of different documented ways of accomplishing this, but they don't seem to work. Below is the code I have tried:

        'Bypass SSL since discovery.gov.mb.ca does not have a valid SSL cert (NOT PRODUCTION SAFE)
        ServerCertificateValidationCallback = _
            New System.Net.Security.RemoteCertificateValidationCallback(AddressOf customXertificateValidation)

        googleUrl = "https://removed.com"

        Dim rdr As New XmlTextReader(googleUrl)
        Dim resolver As New XmlUrlResolver()
        Dim myCred As New System.Net.NetworkCredential("USERNAME", "PASSWORD", Nothing)
        Dim credCache As New CredentialCache()
        credCache.Add(New Uri(googleUrl), "Basic", myCred)
        resolver.Credentials = credCache
        rdr.XmlResolver = resolver
        doc = New System.Xml.XPath.XPathDocument(rdr)
        path = doc.CreateNavigator()

        Private Function customXertificateValidation(ByVal sender As Object, _
                ByVal certificate As System.Security.Cryptography.X509Certificates.X509Certificate, _
                ByVal chain As System.Security.Cryptography.X509Certificates.X509Chain, _
                ByVal sslPolicyErrors As Net.Security.SslPolicyErrors) As Boolean
            Return True
        End Function

    Read the article
