Search Results

Search found 20283 results on 812 pages for 'security context'.

Page 394/812

  • Printing to different printers using Mozilla Firefox

    - by Nick-ACNB
    I am currently creating a web application that will be deployed in an intranet environment, and I chose Firefox as the browser that will run it. In this application I need to be able to print to different printers quickly, since they use different paper sizes depending on which client is coming in. I want to avoid time-wasting mistakes, for instance someone choosing the wrong printer and wasting paper; also, the time needed to find the right printer for the job and then press print is considered too long in the current context. Is there any solution to this problem? I understand the potential security flaw behind this, but please be aware that this is solely an intranet project, and I can reduce the browser's security to the lowest setting since the machines don't access the internet. I know something like this could be done in IE (ActiveX or VBScript), but I am using Firefox. I also guess there could be something rather tricky where pressing print in the browser saves what needs to be printed to a DB, and an exe app fetches that DB every set amount of time and prints to the right printer. Any suggestion would be greatly appreciated. I doubt I am the only one to ever face this issue! :) Thank you very much.
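    A minimal sketch (in C#) of the "save to a DB, let a small exe poll and print" idea mentioned above. Everything here is an assumption made for illustration: the PrintJobs table, its columns, the connection string and the polling interval are not part of the original question.

      // Sketch only: polls a hypothetical PrintJobs table and sends each pending
      // job to the printer named in the row. Table and column names are assumptions.
      using System;
      using System.Data.SqlClient;
      using System.Drawing;
      using System.Drawing.Printing;
      using System.Threading;

      class PrintPoller
      {
          static void Main()
          {
              while (true)
              {
                  using (var cn = new SqlConnection("Data Source=intranetdb;Initial Catalog=PrintQueue;Integrated Security=SSPI"))
                  {
                      cn.Open();
                      var cmd = new SqlCommand("SELECT Id, PrinterName, Body FROM PrintJobs WHERE Printed = 0", cn);
                      using (var reader = cmd.ExecuteReader())
                      {
                          while (reader.Read())
                          {
                              PrintJob(reader.GetString(1), reader.GetString(2));
                              // marking the row as printed is omitted for brevity
                          }
                      }
                  }
                  Thread.Sleep(5000); // poll every few seconds
              }
          }

          static void PrintJob(string printerName, string body)
          {
              var doc = new PrintDocument();
              doc.PrinterSettings.PrinterName = printerName; // choose the target printer per job
              doc.PrintPage += (s, e) => e.Graphics.DrawString(body, new Font("Arial", 10f), Brushes.Black, 10f, 10f);
              doc.Print();
          }
      }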

    Read the article

  • View transform seems to be ignored when animating on iPhone. Why?

    - by heymon
    I have views that can be rotated, scaled and moved around the screen. I want the view to resize and become orthogonal when the user double-taps the view to edit the text view within. The code is below. Somewhere the transform is getting reset: the NSLog statement below prints the identity transform, but printing the transform in transitionDidStop, once the animation is complete, reveals that the transform is what it was before I thought I set it to identity. The view resizes, but it acts like the transform was never set to identity. Any ideas/pointers? originalBounds = self.bounds; originalCenter = self.center; originalTransform = self.transform; r = CGRectMake(0, 0, [UIScreen mainScreen].bounds.size.width - 70, 450); [UIView beginAnimations: nil context: NULL]; [UIView setAnimationDelegate: self]; [UIView setAnimationDidStopSelector: @selector(transitionDidStop:finished:context:)]; self.transform = CGAffineTransformIdentity; NSLog(@"%@ after set", NSStringFromCGAffineTransform([self transform])); [self setBounds: r]; [self setCenter: p]; [self setAlpha: 1.0]; [UIView commitAnimations]; [textView becomeFirstResponder];

    Read the article

  • Flex ChangeWatcher bind to a negative condition

    - by bedwyr
    I have a bindable getter in a component which informs me when a [hidden] timer is running. I also have a context menu which, if this timer is running, should disable one of the menu items. Is it possible to create a ChangeWatcher which watches for the negative condition of a bindable property/getter and changes the enabled property of the menu item? Here are the basic methods I'm trying to bind together: Class A: [Bindable] public function get isPlaying():Boolean { return (_timer != null) ? _timer.running : false; } Class B: private var _playingWatcher:ChangeWatcher; public function createContextMenu():void { //...blah blah, creating context menu var newItem:ContextMenuItem = new ContextMenuItem(); _playingWatcher = BindingUtils.bindProperty(newItem, "enabled", _classA, "isPlaying"); } In the code above, I have the inverse case: when isPlaying() is true, the menu item is enabled; I want it to only be enabled when the condition is false. I could create a second getter (there are other bindings which rely on the current getter) to return the inverse condition, but that sounds ugly to me: [Bindable] public function get isNotPlaying():Boolean { return !isPlaying; } Is this possible, or is there another approach I'm completely missing?

    Read the article

  • How can a PATH environment setting change my executable from using msvcr90 to msvcr80?

    - by Runner
    #include <gtk/gtk.h> int main( int argc, char *argv[] ) { GtkWidget *window; gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_widget_show (window); gtk_main (); return 0; } I tried putting various versions of MSVCR80.dll under the same directory as the generated executable (via cmake), but none matched. Is there a general solution for this kind of problem? UPDATE Some answers recommend installing the VS redist, but I'm not sure whether or not it will affect my installed Visual Studio 9; can someone confirm? Manifest file of the executable <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> It seems the manifest file says it should use MSVCR90, so why is it always reporting a missing MSVCR80.dll? FOUND After spending several hours on it, I finally found it's caused by this setting in PATH: D:\MATLAB\R2007b\bin\win32 After removing it, all works fine. But why can that setting change my running executable from using msvcr90 to msvcr80?

    Read the article

  • Client calls .asmx, Server exposes WCF endpoint

    - by Lucas B
    We have a client that has been configured to connect to an asmx service. We don't want to ask our customers to update their configuration, but we would like to upgrade our service to use WCF. Does anyone know if WCF supports this? If so, what would the configuration file look like? Our asmx service looks like this: <bindings> <binding name="ATransactionSoap" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://.../atransaction.asmx" binding="basicHttpBinding" bindingConfiguration="ATransactionSoap" contract="ATransactionSoap" name="ATransactionSoap" />

    Read the article

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors. Context: At about the same time desktop processors introduced out-of-order execution that made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings that were more precise than could ever be allowed by a system call, and allowed programmers to micro-benchmark their hearts out, for better or for worse. On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained. Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow turning off cores, leaving only one running. I know Mac OS X Leopard does when the right Preference Pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again? More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that if an optimization's gains cannot be measured by timing the whole application, it's not worth optimizing, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.

    Read the article

  • Messages not forwarded to error queue when exception is thrown in handler (it works on my machine)

    - by darthjit
    We are using NServiceBus 4.0.5 with SQL Server (SQL Server 2012) as transport. When the handler throws an exception, NSB does not retry or move the message to the error queue. Successful messages make it to the audit queue but the failed/errored ones don't! Interestingly, all this works on our local machines (Windows 7, SQL Server LocalDB) but not on Windows Server 2012 (SQL Server 2012). Here is the config info on the subscriber: <add name="NServiceBus/Transport" connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" /> <add name="NServiceBus/Persistence" connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" /> <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" /> <UnicastBusConfig ForwardReceivedMessagesTo="audit"> <MessageEndpointMappings> <add Assembly="Services.Section.Messages" Endpoint= "Services.ACL.Worker" /> </MessageEndpointMappings> </UnicastBusConfig> And in code it is configured as follows: public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization { public void Init() { IContainer container = ContainerInstanceProvider. GetContainerInstance(); Configure .Transactions.Enable(); Configure.With() .AutofacBuilder(container) .UseTransport<SqlServer>() .Log4Net() //.Serialization.Json() .UseNHibernateSubscriptionPersister() .UseNHibernateTimeoutPersister() .MessageForwardingInCaseOfFault() .RijndaelEncryptionService() .DefiningCommandsAs(type => type.Namespace != null &&type .Namespace.EndsWith("Commands")) .DefiningEventsAs(type => type.Namespace != null &&type .Namespace.EndsWith("Events")) .UnicastBus(); } } Any ideas on how to fix this? Here is the log info (there is a lot there; search for "error" to see the relevant parts): https://gist.github.com/ranji/7378249

    Read the article

  • Run Visual Studio with Administrator Rights using app.manifest [ExecutionLevel]

    - by srk
    I need to change a registry key in order to restrict the user from using Task Manager, since it is a kiosk application. My code for changing the registry works perfectly under an Administrator account, but my application is going to be run under a normal user account. When I tried to run my application under a normal user account, I get the error below: DisableTaskManagerSystem.UnauthorizedAccessException: Access to the registry key 'HKey_Current_User\Software\Microsoft\Windows\CurrentVersion\Policies\System' is denied. at Microsoft.win32.RegistryKey.win32Error(int32 errorcode, String str) So I need to run my application with administrator privileges, for which I am using the app.manifest below. But somehow I am still getting the same error. How do I overcome this? Code in app.manifest: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <ms_asmv2:trustInfo xmlns:ms_asmv2="urn:schemas-microsoft-com:asm.v2"> <ms_asmv2:security> <ms_asmv2:requestedPrivileges> <ms_asmv2:requestedExecutionLevel level="requireAdministrator" uiAccess="true"> </ms_asmv2:requestedExecutionLevel> </ms_asmv2:requestedPrivileges> </ms_asmv2:security> </ms_asmv2:trustInfo> </assembly>
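    As a quick way to check whether the manifest is actually taking effect at runtime (i.e. whether the process really is elevated when it touches the Policies key), a small diagnostic like the hedged sketch below can be logged at startup. It only reports the current state; it does not fix the registry permission itself.

      // Sketch: report whether the current process is running with administrator rights.
      using System;
      using System.Security.Principal;

      static class ElevationCheck
      {
          static void Main()
          {
              WindowsIdentity identity = WindowsIdentity.GetCurrent();
              var principal = new WindowsPrincipal(identity);
              bool isAdmin = principal.IsInRole(WindowsBuiltInRole.Administrator);
              Console.WriteLine("Running elevated: " + isAdmin);
          }
      }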

    Read the article

  • ASP.NET Response Filter to Reformat the rendered output of ASPX pages?

    - by PropellerHead
    I've created a simple HttpModule and response stream to reformat the rendered output of web pages (see code snippets below). In the HttpModule I set the Response.Filter to my PageStream: m_Application.Context.Response.Filter = new PageStream(m_Application.Context); In the PageStream I override the Write method in order to do my reformatting of the rendered output: public override void Write(byte[] buffer, int offset, int count) { string html = System.Text.Encoding.UTF8.GetString(buffer); //Do some string replace operations here... byte[] input = System.Text.Encoding.UTF8.GetBytes(html); m_DefaultStream.Write(input, 0, input.Length); } This works fine when using it on simple HTML pages (.html), but when I use this method on ASPX pages (.aspx), the Write method is called several times, splitting the reformatting into different steps and potentially destroying the string replacement operations. How do I solve this? Is there a way to make the ASPX page NOT call Write several times, e.g. by changing its buffer size, or have I chosen the wrong approach entirely by using this Response.Filter method to manipulate the rendered output?
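    One common way around the chunked Write calls is to buffer everything the page writes and run the string replacement once, when the filter is flushed at the end of the response. Below is a minimal sketch of that idea; it is not the original PageStream, the DoReplacements helper is a hypothetical stand-in for the real reformatting, and it assumes the page does not call Response.Flush() part-way through (if it does, the same work can be moved to Close instead).

      // Sketch: collects all output and rewrites it in a single pass at flush time.
      public class BufferedPageFilter : System.IO.Stream
      {
          private readonly System.IO.Stream _inner;
          private readonly System.IO.MemoryStream _buffer = new System.IO.MemoryStream();

          public BufferedPageFilter(System.IO.Stream inner) { _inner = inner; }

          public override void Write(byte[] buffer, int offset, int count)
          {
              _buffer.Write(buffer, offset, count); // just collect the chunks, no transformation yet
          }

          public override void Flush()
          {
              string html = System.Text.Encoding.UTF8.GetString(_buffer.ToArray());
              html = DoReplacements(html);          // single pass over the whole page
              byte[] output = System.Text.Encoding.UTF8.GetBytes(html);
              _inner.Write(output, 0, output.Length);
              _inner.Flush();
          }

          private static string DoReplacements(string html) { return html; /* real string work goes here */ }

          // Minimal Stream plumbing for a write-only filter.
          public override bool CanRead { get { return false; } }
          public override bool CanSeek { get { return false; } }
          public override bool CanWrite { get { return true; } }
          public override long Length { get { return 0; } }
          public override long Position { get { return 0; } set { } }
          public override int Read(byte[] buffer, int offset, int count) { throw new System.NotSupportedException(); }
          public override long Seek(long offset, System.IO.SeekOrigin origin) { throw new System.NotSupportedException(); }
          public override void SetLength(long value) { }
      }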

    Read the article

  • Is there a generic way of dealing with varying connection strings in C#?

    - by James Wiseman
    I have an application that needs to connect to a SQL database and execute a SQL Agent Job. The connection string I am trying to access is stored in the registry, which is easily enough pulled out. This application is to be run on multiple computers, and I cannot guarantee the format of this connection string being consistent across these computers. Two that I have pulled out for example are: Data Source=Server1;Initial Catalog=DB1;Integrated Security=SSPI; Data Source=Server2;Initial Catalog=DB1;Provider=SQLNCLI.1;Integrated Security=SSPI;Auto Translate=False; I can use an object of type System.Data.SqlClient.SqlConnection to connect to the database with the first connection string; however, I get the following error when I pass the second to it: keyword not supported: 'provider' Similarly, I can use an object of type System.Data.OleDb.OleDbConnection to connect to the database with the second connection string; however, I get the following error when I pass the first to it: An OLEDB Provider was not specified in the ConnectionString' I can solve this by scanning the string for 'Provider' and doing the connect conditionally, but I can't help feeling that there is a better way of doing this and of handling the connection strings in a more generic fashion. Does anyone have any suggestions?
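    A slightly more generic version of the "scan for Provider" idea is to let DbConnectionStringBuilder parse the string instead of searching it by hand. The sketch below is only an illustration of that approach; it still just picks between SqlConnection and OleDbConnection.

      // Sketch: choose a connection type based on whether the string carries a Provider keyword.
      using System.Data;
      using System.Data.Common;
      using System.Data.OleDb;
      using System.Data.SqlClient;

      static class ConnectionFactory
      {
          public static IDbConnection Create(string connectionString)
          {
              var builder = new DbConnectionStringBuilder { ConnectionString = connectionString };

              if (builder.ContainsKey("Provider"))
              {
                  // OLE DB understands the Provider keyword as-is.
                  return new OleDbConnection(connectionString);
              }

              // No Provider keyword: assume the native SQL Server client.
              return new SqlConnection(connectionString);
          }
      }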

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context I'm building a persistence layer to abstract different types of databases that I'll be needing. On the relational part I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables: CREATE TABLE Contact ( ID varchar(15), NAME varchar(30) ); CREATE TABLE Address ( ID varchar(15), CONTACT_ID varchar(15), NAME varchar(50) ); I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above, all pretty standard stuff so far. The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the DB-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session since all is kept in the Identity Map in memory. Question: What is a good approach to address this? (Avoiding unnecessary DB round trips.) Some ideas: Retrieve the last ID: I can do a call to the database to retrieve the last ID like: SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact'; But this is MySQL specific, and probably something similar can be done for the other databases. If I do this then I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) – all in the current transaction context.
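    To make the "insert parent, read the generated key, then fix up the children" order concrete, here is a hedged sketch of what the Unit of Work's commit could do when the database owns the key. The SQL is MySQL-flavoured (LAST_INSERT_ID); PostgreSQL would use a RETURNING clause and Oracle a sequence NEXTVAL. Contact and Address are minimal stand-ins, not the original classes.

      // Sketch: commit order inside the Unit of Work when the DB generates the parent key.
      using System;
      using System.Collections.Generic;
      using System.Data;

      class Contact { public string Name; public List<Address> Addresses = new List<Address>(); }
      class Address { public string Name; }

      static class UnitOfWorkCommit
      {
          public static void CommitContactWithAddresses(IDbConnection cn, Contact contact)
          {
              using (IDbTransaction tx = cn.BeginTransaction())
              {
                  // 1. Insert the parent and read back its generated key in the same transaction.
                  IDbCommand insertContact = cn.CreateCommand();
                  insertContact.Transaction = tx;
                  insertContact.CommandText =
                      "INSERT INTO Contact (NAME) VALUES (@name); SELECT LAST_INSERT_ID();";
                  AddParam(insertContact, "@name", contact.Name);
                  long contactId = Convert.ToInt64(insertContact.ExecuteScalar());

                  // 2. Fix up the children in memory, then insert them.
                  foreach (Address address in contact.Addresses)
                  {
                      IDbCommand insertAddress = cn.CreateCommand();
                      insertAddress.Transaction = tx;
                      insertAddress.CommandText =
                          "INSERT INTO Address (CONTACT_ID, NAME) VALUES (@contactId, @name)";
                      AddParam(insertAddress, "@contactId", contactId);
                      AddParam(insertAddress, "@name", address.Name);
                      insertAddress.ExecuteNonQuery();
                  }

                  tx.Commit();
              }
          }

          static void AddParam(IDbCommand cmd, string name, object value)
          {
              IDbDataParameter p = cmd.CreateParameter();
              p.ParameterName = name;
              p.Value = value;
              cmd.Parameters.Add(p);
          }
      }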

    Read the article

  • Main Function Error C++

    - by Arjun Nayini
    I have this main function: #ifndef MAIN_CPP #define MAIN_CPP #include "dsets.h" using namespace std; int main(){ DisjointSets s; s.uptree.addelements(4); for(int i=0; i<s.uptree.size(); i++) cout <<uptree.at(i) << endl; return 0; } #endif And the following class: class DisjointSets { public: void addelements(int x); int find(int x); void setunion(int x, int y); private: vector<int> uptree; }; #endif My implementation is this: void DisjointSets::addelements(int x){ for(int i=0; i<x; i++) uptree.push_back(-1); } //Given an int this function finds the root associated with that node. int DisjointSets::find(int x){ //need path compression if(uptree.at(x) < 0) return x; else return find(uptree.at(x)); } //This function reorders the uptree in order to represent the union of two //subtrees void DisjointSets::setunion(int x, int y){ } Upon compiling main.cpp (g++ main.cpp) I'm getting these errors: dsets.h: In function ‘int main()’: dsets.h:25: error: ‘std::vector DisjointSets::uptree’ is private main.cpp:9: error: within this context main.cpp:9: error: ‘class std::vector ’ has no member named ‘addelements’ dsets.h:25: error: ‘std::vector DisjointSets::uptree’ is private main.cpp:10: error: within this context main.cpp:11: error: ‘uptree’ was not declared in this scope I'm not sure exactly what's wrong. Any help would be appreciated.

    Read the article

  • Django: How can I identify the calling view from a template?

    - by bryan
    Short version: Is there a simple, built-in way to identify the calling view in a Django template, without passing extra context variables? Long (original) version: One of my Django apps has several different views, each with its own named URL pattern, that all render the same template. There's a very small amount of template code that needs to change depending on the called view, too small to be worth the overhead of setting up separate templates for each view, so ideally I need to find a way to identify the calling view in the template. I've tried setting up the views to pass in extra context variables (e.g. "view_name") to identify the calling view, and I've also tried using {% ifequal request.path "/some/path/" %} comparisons, but neither of these solutions seems particularly elegant. Is there a better way to identify the calling view from the template? Is there a way to access to the view's name, or the name of the URL pattern? Update 1: Regarding the comment that this is simply a case of me misunderstanding MVC, I understand MVC, but Django's not really an MVC framework. I believe the way my app is set up is consistent with Django's take on MVC: the views describe which data is presented, and the templates describe how the data is presented. It just happens that I have a number of views that prepare different data, but that all use the same template because the data is presented the same way for all the views. I'm just looking for a simple way to identify the calling view from the template, if this exists. Update 2: Thanks for all the answers. I think the question is being overthought -- as mentioned in my original question, I've already considered and tried all of the suggested solutions -- so I've distilled it down to a "short version" now at the top of the question. And right now it seems that if someone were to simply post "No", it'd be the most correct answer :) Update 3: Carl Meyer posted "No" :) Thanks again, everyone.

    Read the article

  • How to change the ASP.NET Configuration tool connection string

    - by Zviadi
    Hello, how can I change the ASP.NET Configuration tool's connection string name? (That is, which connection string will the ASP.NET Configuration tool use?) I'm learning ASP.NET, and everywhere, including the book I'm reading now, there's a connection string named LocalSqlServer. I want to use my local SQL Server database instead of SQL Express to store Roles, Membership and other data. I have used aspnet_regsql.exe to create the needed data structures in my database. After that I changed my web.config to look like: <connectionStrings> <remove name="LocalSqlServer"/> <add name="LocalSqlServer" connectionString="Server=(LOCAL); Database=MyDatabase;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> but when I run the ASP.NET Configuration tool it says: "The connection name 'ApplicationServices' was not found in the applications configuration or the connection string is empty." The ASP.NET Configuration tool uses a connection string named ApplicationServices, not LocalSqlServer. Because of that I have to modify web.config to: <connectionStrings> <add name="ApplicationServices" connectionString="Server=(LOCAL); Database=MyDatabase;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> and everything works fine. I wish to know why my web site uses a connection string named ApplicationServices while all the books and online documentation use LocalSqlServer, and how to change it to LocalSqlServer. I have: Windows 7, SQL Server 2008 R2, Visual Studio 2010 Premium. Project type is website.

    Read the article

  • Handling a form from a different view and passing form validation through the session in Django

    - by Mo J. Mughrabi
    I have a requirement here to build a comment-like app in my Django project; the app has a view to receive a submitted form, process it and return the errors to wherever it came from. I finally managed to get it to work, but I have a doubt: the way I am using it might be wrong, since I am passing the entire validated form in the session. Below is the code comment/templatetags/comment.py @register.inclusion_tag('comment/form.html', takes_context=True) def comment_form(context, model, object_id, next): """ comment_form() is responsible for rendering the comment form """ # clear sessions from variable incase it was found content_type = ContentType.objects.get_for_model(model) try: request = context['request'] if request.session.get('comment_form', False): form = CommentForm(request.session['comment_form']) form.fields['content_type'].initial = 15 form.fields['object_id'].initial = 2 form.fields['next'].initial = next else: form = CommentForm(initial={ 'content_type' : content_type.id, 'object_id' : object_id, 'next' : next }) except Exception as e: logging.error(str(e)) form = None return { 'form' : form } comment/view.py def save_comment(request): """ save_comment: """ if request.method == 'POST': # clear sessions from variable incase it was found if request.session.get('comment_form', False): del request.session['comment_form'] form = CommentForm(request.POST) if form.is_valid(): obj = form.save(commit=False) if request.user.is_authenticated(): obj.created_by = request.user obj.save() messages.info(request, _('Your comment has been posted.')) return redirect(form.data.get('next')) else: request.session['comment_form'] = request.POST return redirect(form.data.get('next')) else: raise Http404 The usage is by loading the template tag and firing {% comment_form article article.id article.get_absolute_url %} My doubt is whether passing the validated form through the session is the correct approach. Would that be a problem? A security risk? Performance issues? Please advise. Update In response to Pol's question: the reason I went with this approach is that the comment form is handled in a separate app. In my scenario, I render objects such as article and all I do is invoke the template tag to render the form. What would be an alternative approach for my case? You also shared with me the Django comments app, which I am aware of, but the client I am working with requires a lot of complex work to be done in the comment app; that's why I am working on a new one.

    Read the article

  • Visual Studio 2008 Unit test does not pick up code changes unless I build the entire solution

    - by Orion Edwards
    Here's the scenario: (1) change my code; (2) change my unit test for that code; (3) with the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command. The Visual Studio "Output" window indicates that the code DLL and the test DLL both successfully build (in that order). The problem, however, is that the unit test does not use the latest version of the DLL which it has just built. Instead, it uses the previously built DLL (which doesn't have the updated code in it), so the test fails. When adding a new method, this results in a MethodNotImplementedException, and when adding a class, it results in a TypeLoadException, both because the unit test thinks the new code is there, and it isn't! If I'm just updating an existing method, then the test just fails due to incorrect results. I can 'work around' the problem by doing this: (1) change my code; (2) change my unit test for that code; (3) invoke VS2008's 'Build Solution' command; (4) with the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command. The problem is that doing a full Build Solution (even though nothing has changed) takes upwards of 30 seconds, as I have approx 50 C# projects, and VS2008 is not smart enough to realize that only 2 of them need to be looked at. Having to wait 30 seconds just to change 1 line of code and re-run a unit test is abysmal. Is there anything I can do to fix this? None of my code is in the GAC or anything funny like that; it's just ordinary old DLLs (building against .NET 3.5 SP1 on a Win7/64-bit machine). Please help!

    Read the article

  • C# IE Proxy Detection not working from a Windows Service

    - by mytwocents
    This is very odd. I've written a Windows Forms tool in C# to return the default IE proxy. It works (see code snippets below). I've created a service to return the proxy using the same code; it works when I step into the debugger, BUT it doesn't work when the service is running as binary code. This must be some sort of permissions or context thing that I cannot recreate. Further, if I call the tool from the Windows service, it still doesn't return the proxy information... this seems isolated to the service somehow. I don't see any permission errors in the event log; it's as if a service has a totally different context than a regular Windows Forms exe. I've tried both: WebRequest.GetSystemWebProxy() and Uri requestUri = new Uri(urlOfDomain); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri); IWebProxy proxy = request.Proxy; if (!proxy.IsBypassed(requestUri)){ Uri uri2 = proxy.GetProxy(request.RequestUri); } Thanks for any help!
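    To narrow down what the service's security context actually resolves, a tiny probe like the hedged sketch below (logging to a file, since a service has no console) can be dropped into both the Windows Forms tool and the service for comparison. The URL and log path are placeholders, not part of the original code.

      // Sketch: record which proxy, if any, the current process context resolves for a URI.
      using System;
      using System.IO;
      using System.Net;

      static class ProxyProbe
      {
          public static void Log(string logPath)
          {
              var probe = new Uri("http://www.example.com/");
              IWebProxy systemProxy = WebRequest.GetSystemWebProxy();
              Uri resolved = systemProxy.GetProxy(probe);   // returns the target itself when no proxy applies
              bool bypassed = systemProxy.IsBypassed(probe);
              File.AppendAllText(logPath, string.Format("bypassed={0}, proxy={1}{2}", bypassed, resolved, Environment.NewLine));
          }
      }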

    Read the article

  • Validate a XDocument against schema without the ValidationEventHandler (for use in a HTTP handler)

    - by Vaibhav Garg
    Hi everyone, (I am new to schema validation.) Regarding the following method, System.Xml.Schema.Extensions.Validate( ByVal source As System.Xml.Linq.XDocument, ByVal schemas As System.Xml.Schema.XmlSchemaSet, ByVal validationEventHandler As System.Xml.Schema.ValidationEventHandler, ByVal addSchemaInfo As Boolean) I am using it as follows inside an IHttpHandler - Try Dim xsd As XmlReader = XmlReader.Create(context.Server.MapPath("~/App_Data/MySchema.xsd")) Dim schemas As New XmlSchemaSet() : schemas.Add("myNameSpace", xsd) : xsd.Close() myXDoxumentOdj.Validate(schemas, Function(s As Object, e As ValidationEventArgs) SchemaError(s, e, context), True) Catch ex1 As Threading.ThreadAbortException 'manage schema error' Return Catch ex As Exception 'manage other errors' End Try The handler- Function SchemaError(ByVal s As Object, ByVal e As ValidationEventArgs, ByVal c As HttpContext) As Object If c Is Nothing Then c = HttpContext.Current If c IsNot Nothing Then HttpContext.Current.Response.Write(e.Message) HttpContext.Current.Response.End() End If Return New Object() End Function This is working fine for me at present but looks very weak. I do get errors when I feed it bad XML, but I want to implement it in a more elegant way. This looks like it would break for large XML etc. Is there some way to validate without the handler so that I get the document validated in one go and then deal with the errors? To me it looks async, such that the call to Validate() would return and some non-deterministic time later the handler would get called with the results/errors. Is that right? Thanks and sorry for any goofy mistakes :).
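    For what it's worth, Validate() is synchronous: the callback is invoked inline while Validate() is executing, so every error is available as soon as the call returns. A minimal sketch of collecting the errors into a list first and deciding afterwards how to report them is below (written in C# for brevity, while the original handler is VB.NET; the names are illustrative).

      // Sketch: gather all schema validation errors in one pass.
      using System.Collections.Generic;
      using System.Xml;
      using System.Xml.Linq;
      using System.Xml.Schema;

      static class SchemaChecker
      {
          public static IList<string> ValidateDocument(XDocument doc, string xsdPath, string targetNamespace)
          {
              var errors = new List<string>();

              var schemas = new XmlSchemaSet();
              using (XmlReader xsd = XmlReader.Create(xsdPath))
              {
                  schemas.Add(targetNamespace, xsd);
              }

              // The lambda runs synchronously for every error or warning found.
              doc.Validate(schemas, (sender, e) => errors.Add(e.Message));

              return errors; // an empty list means the document validated cleanly
          }
      }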

    Read the article

  • Java threads, wait time always 00:00:00 - Producer/Consumer

    - by user3742254
    I am currently doing a producer consumer problem with a number of threads and have had to set priorities and waits on them to ensure that one thread, the security thread, runs last. I have managed to do this and I have managed to get the buffer working. The last thing that I am required to do is to show the wait time of threads that are too large for the buffer and to calculate the average wait time. I have included code to do so, but every time I run the program the wait time is always returned as 00:00:00, and by extension, the average is returned as the same. I was speaking to one of my colleagues who said that it is not a matter of the code but rather a matter of the computer needing to work off of one processor, which can be adjusted in the Task Manager settings. He has an HP like myself, but his program prints the wait time 180 times, whereas mine prints usually about 3-7 times and is only 00:00:01 on one instance before finishing when I have made the processor adjustments. My other colleague has an iMac and hers puts out an average of 42:00:34 (42 minutes?). I am very confused about this because I can see no difference between our code, and, like my colleague said, I was wondering whether it is a computer issue. I am obviously concerned as I wanted to make sure that my code correctly calculated an average wait time, but that is impossible to tell when the wait times always show as 00:00:00. The thread duration, including the time it entered and exited the buffer, was calculated using a timestamp import and subtracting the start time from the end time. Is my code correct for this issue or is there something missing? I would be very grateful for any solutions. Below is my code: My buffer class package com.Com813cw; import java.text.DateFormat; import java.text.SimpleDateFormat; /** * Created by Rory on 10/08/2014. */ class Buffer { private int contents, count = 0, process = 200; private int totalRam = 1000; private boolean available = false; private long start, end, wait, request = 0; private DateFormat time = new SimpleDateFormat("ss:SSS"); public int avWaitTime =0; public void average(){ System.out.println("Average Application Request wait time: "+ time.format(request/count)); } public synchronized int get() { while (process <= 500) { try { wait(); } catch (InterruptedException e) { } } process -= 200; System.out.println("CPU After Process " + process); notifyAll(); return contents; } public synchronized void put(int value) { if (process <= 500) { process += value; } else { start = System.currentTimeMillis(); try { wait(); } catch (InterruptedException e) { } end = System.currentTimeMillis(); wait = end - start; count++; request += wait; System.out.println("Application Request Wait Time: " + time.format(wait)); process += value; contents = value; calcWait(wait, count); } notifyAll(); } public void calcWait(long wait, int count){ this.avWaitTime = (int) (wait/count); } public void printWait(){ System.out.println("Wait time is " + time.format(this.avWaitTime)); } } My Spotify class package com.Com813cw; import java.sql.Timestamp; /** * Created by Rory on 11/08/2014.
*/ class Spotify extends Thread { private Buffer buffer; private int number; private int bytes = 250; public Spotify(Buffer c, int number) { buffer = c; this.number = number; } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 20; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes "); try { sleep(1000); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("Spotify has finished executing."); System.out.println("Time taken to execute was " + timeTaken + " milliseconds"); System.out.println("Time that Spotify thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("-----------------------------"); } } My BubbleWitch class package com.Com813cw; import java.lang.*; import java.lang.System; import java.sql.Timestamp; /** * Created by Rory on 10/08/2014. */ class BubbleWitch2 extends Thread { private Buffer buffer; private int number; private int bytes = 100; public BubbleWitch2(Buffer c, int number) { buffer = c; this.number=number ; } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 10; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes "); try { sleep(1000); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("BubbleWitch2 has finished executing."); System.out.println("Time taken to execute was " +timeTaken+ " milliseconds"); System.out.println("Time Bubblewitch2 thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("-----------------------------"); } } My Test class package com.Com813cw; /** * Created by Rory on 10/08/2014. */ public class ProducerConsumerTest { public static void main(String[] args) throws InterruptedException { Buffer c = new Buffer(); BubbleWitch2 p1 = new BubbleWitch2(c,1); Processor c1 = new Processor(c, 1); Spotify p2 = new Spotify(c, 2); SystemManagement p3 = new SystemManagement(c, 3); SecurityUpdate p4 = new SecurityUpdate(c, 4, p1, p2, p3); p1.setName("BubbleWitch2 "); p2.setName("Spotify "); p3.setName("System Management "); p4.setName("Security Update "); p1.setPriority(10); p2.setPriority(10); p3.setPriority(10); p4.setPriority(5); c1.start(); p1.start(); p2.start(); p3.start(); p4.start(); p2.join(); p3.join(); p4.join(); c.average(); System.exit(0); } } My security update package com.Com813cw; import java.lang.*; import java.lang.System; import java.sql.Timestamp; /** * Created by Rory on 11/08/2014. 
*/ class SecurityUpdate extends Thread { private Buffer buffer; private int number; private int bytes = 150; private int process = 0; public SecurityUpdate(Buffer c, int number, BubbleWitch2 bubbleWitch2, Spotify spotify, SystemManagement systemManagement) throws InterruptedException { buffer = c; this.number = number; bubbleWitch2.join(); spotify.join(); systemManagement.join(); } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 15; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes"); try { sleep(1500); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("Security Update has finished executing."); System.out.println("Time taken to execute was " + timeTaken + " milliseconds"); System.out.println("Time that SecurityUpdate thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("------------------------------"); } } I'd be grateful as I said for any help as this is the last and most frustrating obstacle.

    Read the article

  • LINQ to SQL: Reusable expression for property?

    - by coenvdwel
    Pardon me for being unable to phrase the title more exactly. Basically, I have three LINQ objects linked to tables. One is Product, the other is Company, and the last is a mapping table, Mapping, which stores which Company sells which Products and by which ID that Company refers to each Product. I am now retrieving a list of products as follows: var options = new DataLoadOptions(); options.LoadWith<Product>(p => p.Mappings); context.LoadOptions = options; var products = ( from p in context.Products select new { ProductID = p.ProductID, //BackendProductID = p.BackendProductID, BackendProductID = (p.Mappings.Count == 0) ? "None" : (p.Mappings.Count > 1) ? "Multiple" : p.Mappings.First().BackendProductID, Description = p.Description } ).ToList(); This does a single query retrieving the information I want. But I want to be able to move the logic behind the BackendProductID into the LINQ object so I can use the commented line instead of the annoyingly nested ternary operator statements, for neatness and re-usability. So I added the following property to the Product object: public string BackendProductID { get { if (Mappings.Count == 0) return "None"; if (Mappings.Count > 1) return "Multiple"; return Mappings.First().BackendProductID; } } The list is still the same, but it now does a query for every single Product to get its BackendProductID. The code is neater and re-usable, but the performance now is terrible. What I need is some kind of Expression or Delegate, but I couldn't get my head around writing one; it always ended up querying for every single product. Any help would be appreciated!
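    A frequently used trick for keeping the projection translatable while still reusing the logic is to store it as an expression rather than a property, and splice it into the query with a helper library such as LinqKit (AsExpandable/Invoke). The sketch below is only an illustration of that idea under that assumption; it is not part of the original code.

      // Sketch: reuse the ternary logic as an Expression so LINQ to SQL can still translate it.
      // Assumes the LinqKit library for AsExpandable() and Invoke().
      using System;
      using System.Linq;
      using System.Linq.Expressions;
      using LinqKit;

      public partial class Product
      {
          public static readonly Expression<Func<Product, string>> BackendProductIdExpression =
              p => p.Mappings.Count == 0 ? "None"
                 : p.Mappings.Count > 1  ? "Multiple"
                 : p.Mappings.First().BackendProductID;
      }

      // Usage inside the existing query:
      // var products =
      //     (from p in context.Products.AsExpandable()
      //      select new
      //      {
      //          ProductID = p.ProductID,
      //          BackendProductID = Product.BackendProductIdExpression.Invoke(p),
      //          Description = p.Description
      //      }).ToList();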

    Read the article

  • Inserting and deleting SIM contacts works, but the phone needs to be restarted to see the changes

    - by girishgm08
    Hi All, I am able to insert contacts into the SIM card and delete them from it, but the phone needs to be restarted before the changes show up. Below is the code that works for deleting the contacts: Uri simUri = Uri.parse("content://icc/adn"); Cursor cur = context.getContentResolver().query(simUri, null, null, null, null); prn("Number of SIM Contacts are.."+cur.getCount()); int row =0; while(cur.moveToNext()){ String name = cur.getString(cur.getColumnIndex("name")); prn("Name..."+name); String data = cur.getString(cur.getColumnIndex("number")); if(!data.equals("")) prn("Number.."+data); String where = null; if(!name.equals("") && !data.equals("")){ where = "tag =" + name + "AND" + "number =" +data; } else if(name.equals("") && !data.equals("")){ where = "number ="+data; } else { where = "tag ="+name+ "AND" +"number="+null; } context.getContentResolver().delete(simUri, where, null); row++; } prn(row+" are deleted"); cur.close(); cur = null; Please look into this issue and give suggestions on this. Thanks, Girish G M

    Read the article

  • Newbie: Render RGB to GTK widget -- howto?

    - by Billy Pilgrim
    Hi All, Big picture: I want to render an RGB image via GTK on a Linux box. I'm a frustrated GTK newbie, so please forgive me. I assume that I should create a drawing area in which to render the image -- correct? Do I then have to create a graphics context attached to that area? How? My simple app (which doesn't even address the RGB issue yet) is this: int main(int argc, char** argv) { GdkGC * gc = NULL; GtkWidget * window = NULL; GtkDrawingArea * dpage = NULL; GtkWidget * page = NULL; gtk_init( &argc, & argv ); window = gtk_window_new( GTK_WINDOW_TOPLEVEL ); page = gtk_drawing_area_new( ); dpage = GTK_DRAWING_AREA( page ); gtk_widget_set_size_request( page, PAGE_WIDTH, PAGE_HEIGHT ); gc = gdk_gc_new( GTK_DRAWABLE( dpage ) ); gtk_widget_show( window ); gtk_main(); return (EXIT_SUCCESS); } My dpage is apparently not a 'drawable' (though it is a drawing area). I am confused as to: a) how do I get/create the graphics context which is required in subsequent function calls? b) am I close to a solution, or am I so completely *#&@& wrong that there is no hope? c) is there a baby-steps tutorial? (I started with hello world as my base, so I got that far.) Any and all help appreciated. bp

    Read the article

  • Streaming content to JSF UI

    - by Mark Lewis
    Hello, I was quite happy with my JSF app which read the contents of MQ messages received and supplied them to the UI like this: <rich:panel> <snip> <rich:panelMenuItem label="mylabel" action="#{MyBacking.updateCurrent}"> <f:param name="current" value="mylog.log" /> </rich:panelMenuItem> </snip> </rich:panel> <rich:panel> <a4j:outputPanel ajaxRendered="true"> <rich:insert content="#{MyBacking.log}" highlight="groovy" /> </a4j:outputPanel> </rich:panel> and in MyBacking.java private String logFile = null; ... public String updateCurrent() { FacesContext context=FacesContext.getCurrentInstance(); setCurrent((String)context.getExternalContext().getRequestParameterMap().get("current")); setLog(getCurrent()); return null; } public void setLog(String log) { sendMsg(log); msgBody = receiveMsg(moreargs); logFile = msgBody; } public String getLog() { return logFile; } until the contents of one of the messages was too big and tomcat fell over. Obviously, I thought, I need to change the way it works so that I return some form of stream so that no one object grows so big that the container dies and the content returned by successive messages is streamed to the UI as it comes in. Am I right in thinking that I can replace the work I'm doing now on a String object with a BufferedOutputStream object ie no change to the JSF code and something like this changing at the back end: private BufferedOutputStream logFile = null; public void setLog(String log) { sendMsg(args); logFile = (BufferedOutputStream) receiveMsg(moreargs); } public String getLog() { return logFile; }

    Read the article

  • Why is a minus sign prepended to my BigInteger?

    - by kyrogue
    package ewa; import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.logging.Level; import java.util.logging.Logger; import java.math.BigInteger; /** * * @author Lotus */ public class md5Hash { public static void main(String[] args) throws NoSuchAlgorithmException { String test = "abc"; MessageDigest md = MessageDigest.getInstance("MD5"); try { md.update(test.getBytes("UTF-8")); byte[] result = md.digest(); BigInteger bi = new BigInteger(result); String hex = bi.toString(16); System.out.println("Pringting result"); System.out.println(hex); } catch (UnsupportedEncodingException ex) { Logger.getLogger(md5Hash.class.getName()).log(Level.SEVERE, null, ex); } } } I am testing conversion of bytes to hex, and when done, the end result has a minus sign at the beginning of the string. Why does this happen? I have read the docs and they say it will add a minus sign, but I do not understand it. And will the minus sign affect the hash result? I am going to use this to hash passwords stored in my database.

    Read the article

  • Entity Framework - Many to Many Subquery

    - by Jorin
    I asked a question about this previously, but my database structure has changed, and while it made other things simpler, now this part is more complicated. Here is the previous question. At the time, my EF context had a UsersProjects object because there were other properties. Now that I've simplified that table, it is just the keys, so all my EF context knows about is Users and Projects and the M2M relationship between them. There is no more UsersProjects as far as EF knows. So my goal is to say "show me all the users who are working on projects with me." In SQL, this would go something like: SELECT * FROM Users INNER JOIN UsersProjects ON Users.ID=UsersProjects.UserID WHERE ProjectID IN (SELECT ProjectID FROM UsersProjects WHERE UserID=@UserID) and in EF I started with something like this: var myProjects = (from p in edmx.Projects where p.Users.Contains(edmx.Users.FirstOrDefault(u => u.Email == UserEmail)) orderby p.Name select p).ToList(); var associatedUsers = (from u in edmx.Users where myProjects.Contains(?????????) //where myProjects.Any(????????) select u); The trick is finding what to put in the ????????. Can anyone help here?
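    One way to fill in that gap is to drop the intermediate myProjects list and express the whole thing as a single query over the navigation properties, letting EF build the join against the hidden mapping table. A hedged sketch (it assumes User exposes a Projects navigation property, the inverse of Project.Users):

      // Sketch: users who share at least one project with the given user, excluding that user.
      var associatedUsers =
          (from u in edmx.Users
           where u.Email != UserEmail
              && u.Projects.Any(p => p.Users.Any(x => x.Email == UserEmail))
           select u).ToList();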

    Read the article
