Search Results

Search found 4860 results on 195 pages for 'parallel extensions'.

Page 162/195 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • BITS, TakeOwnership, and Kerberos / Windows Integrated Authentication

    - by Charlie Flowers
    We're using BITS to upload files from machines in our retail locations to our servers. BITS will stop transferring a file if the user who owns the BITS job logs off, so we're using a Windows Service running as LocalSystem to submit the jobs to BITS and be the job owner. This allows transfers to continue 24/7. However, it raises a question about authentication. We want the BITS server extensions in IIS to use Kerberos to authenticate the client machine. As far as I can tell, that leaves us with only two options, neither of which is ideal: either we create an "ImageUploader" account and store its username/password in a config file that the Windows Service uses as credentials for the BITS job, or we ask the logged-on user who creates the BITS job for his password and then use his credentials for the BITS job. I guess the third option is not to use Kerberos at all, and maybe go with Basic Auth plus SSL. I'm sure I'm wrong and there's a better option. Is there? Thanks in advance.

    Read the article

  • Hiding Command Prompt with CodeDomProvider

    - by j-t-s
    Hi all. I've just put together my own little custom C# compiler, using the article from MSDN. But when I create a new Windows Forms application with my sample compiler, a console (MS-DOS) window also appears, and if I close that console window my WinForms app closes too. How can I tell the compiler not to show the console window at all? Thank you :) Here's my code:

        using System;

        namespace JTS
        {
            public class CSCompiler
            {
                protected string ot, rt, ss, es;
                protected bool rg, cg;

                public string Compile(String se, String fe, String[] rdas, String[] fs, Boolean rn)
                {
                    System.CodeDom.Compiler.CodeDomProvider CODEPROV =
                        System.CodeDom.Compiler.CodeDomProvider.CreateProvider("CSharp");
                    ot = fe;

                    System.CodeDom.Compiler.CompilerParameters PARAMS =
                        new System.CodeDom.Compiler.CompilerParameters();
                    // Ensure the compiler generates an EXE file, not a DLL.
                    PARAMS.GenerateExecutable = true;
                    PARAMS.OutputAssembly = ot;
                    PARAMS.CompilerOptions = "/target:winexe";
                    PARAMS.ReferencedAssemblies.Add(typeof(System.Xml.Linq.Extensions).Assembly.Location);
                    PARAMS.LinkedResources.Add("this.ico");

                    foreach (String ay in rdas)
                    {
                        if (ay.Contains(".dll"))
                            PARAMS.ReferencedAssemblies.Add(ay);
                        else
                        {
                            string refd = ay;
                            refd = refd + ".dll";
                            PARAMS.ReferencedAssemblies.Add(refd);
                        }
                    }

                    System.CodeDom.Compiler.CompilerResults rs = CODEPROV.CompileAssemblyFromFile(PARAMS, fs);

                    if (rs.Errors.Count > 0)
                    {
                        foreach (System.CodeDom.Compiler.CompilerError COMERR in rs.Errors)
                        {
                            es = es + "Line number: " + COMERR.Line
                                    + ", Error number: " + COMERR.ErrorNumber
                                    + ", '" + COMERR.ErrorText + ";"
                                    + Environment.NewLine + Environment.NewLine;
                        }
                    }
                    else
                    {
                        // Compilation succeeded.
                        es = "Compilation Succeeded.";
                        if (rn) System.Diagnostics.Process.Start(ot);
                    }

                    return es;
                }
            }
        }
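
    For reference (not part of the original question), a minimal sketch of the two settings that usually keep a console window from appearing: compiling with /target:winexe so the output targets the Windows GUI subsystem, and launching the result with CreateNoWindow. "MyApp.exe" is a placeholder name, not taken from the question.

        using System.CodeDom.Compiler;
        using System.Diagnostics;

        class NoConsoleSketch
        {
            static void Main()
            {
                // Build a GUI-subsystem executable; such an exe gets no console of its own.
                var parameters = new CompilerParameters
                {
                    GenerateExecutable = true,
                    OutputAssembly = "MyApp.exe",          // placeholder output path
                    CompilerOptions = "/target:winexe"
                };
                // ... CompileAssemblyFromFile(parameters, sourceFiles) as in the question ...

                // Launch the result without creating a console window for the child process.
                var startInfo = new ProcessStartInfo("MyApp.exe")
                {
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                Process.Start(startInfo);
            }
        }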

    Read the article

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi all. I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine, as I can access mysql from the command line and make changes to the database. My problem is that if I now run rake db:migrate I get:

        rake aborted!
        uninitialized constant MysqlCompat::MysqlRes
        (See full trace by running task with --trace)

    Now it appears that my mysql gem isn't working correctly, as I can access MySQL fine from Python using the Python driver (which I also compiled). I therefore tried to rebuild the gem using the following command from this site: http://techliberty.blogspot.com/ (incidentally, I am using a recent Intel MacBook Pro):

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    This compiles, although I get "No definition for ..." messages while the documentation is generated:

        Building native extensions.  This could take a while...
        Successfully installed mysql-2.8.1
        1 gem installed
        Installing ri documentation for mysql-2.8.1...
        No definition for next_result
        No definition for field_name
        ...

    I'm a little stumped, as my mysql_config is located in the correct place (/usr/local/mysql/bin/mysql_config) and I have removed all other instances of the mysql gem from my system. Any suggestions would be greatly appreciated. Many thanks. PS: I saw this previous post http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem but it doesn't seem applicable to my version.

    Read the article

  • chrome extension login security with iframe

    - by Weaver
    I should note, I'm not a chrome extension expert. However, I'm looking for some advice or a high-level solution to a security concern I have with my chrome extension. I've searched quite a bit but can't seem to find a concrete answer.

    The situation: I have a chrome extension that needs to have the user log in to our backend server. However, it was decided for design reasons that the default chrome popup balloon was undesirable, so I've used a modal dialog and jQuery to make a styled popup that is injected with content scripts. Hence, the popup is injected into the DOM of the page you are visiting.

    The problem: everything works, however now that I need to implement login functionality I've noticed a vulnerability. If the site we've injected our popup into knows the password field's ID, it could run a script to continuously monitor the password and username fields and store that data. Call me paranoid, but I see it as a risk. In fact, I wrote a mockup attack site that can correctly pull the user and password when entered into the given fields.

    My devised solution: I took a look at some other chrome extensions, like Buffer, and noticed that what they do is load their popup from their website and instead embed an iframe which contains the popup. The popup would interact with the server inside the iframe. My understanding is that iframes are subject to the same-origin scripting policies as other websites, but I may be mistaken. As such, would doing the same thing be secure?

    TLDR: to simplify, if I embedded an https login form from our server into a given DOM, via a chrome extension, are there security concerns about password sniffing? If this is not the best way to deal with chrome extension logins, do you have suggestions for what is? Perhaps there is a way to declare text fields that javascript simply cannot interact with? Not too sure! Thank you so much for your time! I will happily clarify anything required.

    Read the article

  • How to share data between SSRS Security and Data Processing extension?

    - by user2904681
    I've spent a lot of time trying to solve the issue described in the title and have not found a solution yet. I use MS SSRS 2012 with custom Security (based on Forms Authentication and ClaimsPrincipal) and Data Processing extensions. At the Data Processing extension level I need to apply a filter programmatically, based on one of the claims, which I can access only at the Security extension level. Here is the problem: I do not know how to pass the claim from the Security extension to the Data Processing extension code... What I've tried:

        IAuthenticationExtension.LogonUser(string userName, string password, string authority)
        {
            ...
            ClaimsPrincipal claimsPrincipal = CreateClaimsPrincipal(...);
            Thread.CurrentPrincipal = claimsPrincipal;
            HttpContext.Current.User = claimsPrincipal;
            ...
        };

    But it doesn't work. It seems SSRS overrides it internally with either GenericPrincipal or FormsIdentity. The possible workarounds I'm thinking about (but haven't checked yet):

    1. Create an HttpModule which will populate the HttpContext with all required information (downside: it would have to fetch the claims on every request, which is a heavy operation)
    2. Write the logged-on users' information that the Data extension needs to a custom SQL table, and then read it back
    3. Try somehow to append the information to cookies during LogOn, then read it each time in IAuthenticationExtension.GetUserInfo and fill the HttpContext

    None of them seems to be a good solution. I would be grateful for any help/advice/comments.
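
    For reference (not part of the original question), a minimal sketch in the spirit of workaround 2, but using an in-memory store instead of a SQL table. It assumes both extensions run in the same Report Server process and that the Data Processing extension can obtain the current user name; the ClaimStore class and the key format are hypothetical.

        using System;
        using System.Runtime.Caching;

        // Hypothetical helper shared by the Security and Data Processing extensions.
        public static class ClaimStore
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            // Called from the Security extension after a successful LogonUser.
            public static void Save(string userName, string claimValue)
            {
                Cache.Set("claim:" + userName, claimValue,
                          new CacheItemPolicy { SlidingExpiration = TimeSpan.FromHours(1) });
            }

            // Called from the Data Processing extension when building the filter.
            public static string Load(string userName)
            {
                return Cache.Get("claim:" + userName) as string;
            }
        }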

    Read the article

  • Using Rx to synchronize asynchronous events

    - by Martin Liversage
    I want to put Reactive Extensions for .NET (Rx) to good use and would like to get some input on doing some basic tasks. To illustrate what I'm trying to do, I have a contrived example where I have an external component with asynchronous events:

        class Component
        {
            public void BeginStart() { ... }
            public event EventHandler Started;
        }

    The component is started by calling BeginStart(). This method returns immediately, and later, when the component has completed startup, the Started event fires. I want to create a synchronous start method by wrapping the component and waiting until the Started event has fired. This is what I've come up with so far:

        class ComponentWrapper
        {
            readonly Component component = new Component();

            void StartComponent()
            {
                var componentStarted = Observable.FromEvent<EventArgs>(this.component, "Started");

                using (var startedEvent = new ManualResetEvent(false))
                using (componentStarted.Take(1).Subscribe(e => { startedEvent.Set(); }))
                {
                    this.component.BeginStart();
                    startedEvent.WaitOne();
                }
            }
        }

    I would like to get rid of the ManualResetEvent, and I expect that Rx has a solution. But how?
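
    For reference (not part of the original question), one way the ManualResetEvent is often replaced: publish the event stream so the notification is buffered even if it fires early, then block with the First() operator. A sketch only, reusing the question's FromEvent call; operator names vary a little between Rx releases.

        void StartComponent()
        {
            // Buffer the single Started notification in case it arrives before we block.
            var started = Observable
                .FromEvent<EventArgs>(this.component, "Started")
                .Take(1)
                .Replay();

            using (started.Connect())          // start listening now
            {
                this.component.BeginStart();
                started.First();               // blocks until Started has fired
            }
        }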

    Read the article

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application, consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter), and the parameters may get changed over time. Currently I'm thinking of the following:

    SearchController (stateless session bean) formulates a set of search parameters and sends it off to a SearchListener via JMS.
    SearchListener (message-driven bean) receives the search parameters and instantiates a SearchWorker with those parameters.
    SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given 'end time' has been reached, it ends.

    What I'm wondering now: Is there a problem with EJB3 instances running for days (other than that I need to be able to deal with container restarts)? How do I know how many and which EJB instances of SearchWorker are currently running? Is it possible to communicate with them individually (similar to sending a System V signal to a unix process), e.g. to send new parameters, to end an instance, etc.?

    Read the article

  • C# compiler error CS0006: metadata file is not found

    - by Rob
    I've built a C# compiler using the tutorial on MSDN and a few other resources, including here, and I've gotten it to work until I add additional reference assemblies. My errors stem from adding "System.dll" and "System.Windows.Forms.dll" to the ReferencedAssemblies list. Here's my code:

        private void SetUpCompilingParameters()
        {
            string ver = string.Format("{0}.{1}.{2}", Environment.Version.Major, Environment.Version.MajorRevision, Environment.Version.Build);
            string libDir = string.Format(@"{0}", Environment.CurrentDirectory);
            string raDir = @"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0";
            string exWpfDir = string.Format(@"C:\WINDOWS\Microsoft.NET\Framework\v{0}\WPF", ver);
            string exDir = string.Format(@"C:\WINDOWS\Microsoft.NET\Framework\v{0}", ver);

            MyCompiler = new CSharpCodeProvider();

            CompilingParam = new CompilerParameters();
            CompilingParam.GenerateExecutable = false;
            CompilingParam.GenerateInMemory = true;
            CompilingParam.IncludeDebugInformation = false;
            CompilingParam.TreatWarningsAsErrors = false;

            CompilingParam.CompilerOptions = string.Format("/lib:{0}", libDir);
            CompilingParam.CompilerOptions = string.Format("/lib:{0}", raDir);
            CompilingParam.CompilerOptions = string.Format("/lib:{0}", exDir);
            //CompilingParam.CompilerOptions = string.Format("/lib:{0}", exWpfDir);

            CompilingParam.ReferencedAssemblies.Add("System.dll");
            CompilingParam.ReferencedAssemblies.Add("System.Windows.Forms.dll");
        }

    As you can see, I've explicitly referenced directories in CompilerOptions but it's not helping. I'd like to test the solution here on Stack Overflow that uses

        CompilingParam.ReferencedAssemblies.Add(typeof(System.Xml.Linq.Extensions).Assembly.Location);

    but I'm having trouble using it for the general System.dll etc...
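
    For reference (not part of the original question), two observations sketched as lines that would sit inside the same SetUpCompilingParameters method. First, CompilerOptions is a plain string, so each assignment above replaces the previous one and only the last /lib directory survives; the switches have to be combined into one string (I believe csc accepts a repeated /lib switch, and the documented form also allows a comma-separated directory list after a single /lib). Second, the typeof(...).Assembly.Location trick works for the framework assemblies as well, using any public type defined in the assembly you want.

        // Combine the /lib switches into a single options string (assignments overwrite each other).
        CompilingParam.CompilerOptions =
            string.Format("/lib:\"{0}\" /lib:\"{1}\" /lib:\"{2}\"", libDir, raDir, exDir);

        // Reference framework assemblies by the location of a type they define.
        CompilingParam.ReferencedAssemblies.Add(typeof(System.Uri).Assembly.Location);                 // System.dll
        CompilingParam.ReferencedAssemblies.Add(typeof(System.Windows.Forms.Form).Assembly.Location);  // System.Windows.Forms.dll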

    Read the article

  • Can't Install libcurl PHP on Ubuntu Linux

    - by FranticPedantic
    I am trying to use the new Facebook API and it requires the libcurl PHP extension. I ran:

        sudo apt-get install php5-curl
        sudo apachectl -k restart

    and it didn't work. I get the same error, and the phpinfo() page says nothing about libcurl. The source of this problem is probably that I built some of the tools from source (apache2, php), but then I got bored and installed a lot of the extensions with the package manager. I'm not exactly sure how to go about diagnosing the point of failure, though. The apt-get install for curl definitely worked, and the extension can be found in /usr/lib/php5/20060613/curl.so. I think a lot of my confusion stems from not knowing which files go where, and what purpose they have. Any help would be appreciated, and please tell me if I need to provide more information.

    Edit: the specific error I get is:

        Exception: Facebook needs the CURL PHP extension.

    from the line

        if (!function_exists('curl_init')) {
          throw new Exception('Facebook needs the CURL PHP extension.');
        }

    Ubuntu: 9.10
    PHP: 5.2.13
    Loaded Configuration File: /etc/php5/apache2/php.ini
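
    For reference (not part of the original question): with a PHP built from source, the packaged extension usually has to be enabled by hand in the php.ini that Apache actually loads. A sketch, assuming the paths quoted in the question; the exact directive depends on the build's extension_dir setting.

        ; in /etc/php5/apache2/php.ini (the "Loaded Configuration File" above)
        extension=/usr/lib/php5/20060613/curl.so

    followed by another sudo apachectl -k restart.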

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
        VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan and the largest-cost nodes are:

    A Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post.

    A Table Spool, which is 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
          <OutputList>
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
            <ColumnReference Column="Expr1054" />
            <ColumnReference Column="Expr1055" />
          </OutputList>
          <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run

        ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL

    before the queries and then

        ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL

    after the queries, and that hardly shaved anything off the time. Now, I am running these queries in a .NET application that uses a SqlCommand object to send the query. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
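
    For reference (not part of the original question), a common alternative for a load of this size is a single bulk copy instead of 110K individual INSERT statements, assuming the per-row identity values can be resolved after the load rather than with @@IDENTITY per statement. A minimal C# sketch; the connection string is a placeholder and the DataTable is assumed to match dbo.InvoiceDetail's columns.

        using System.Data;
        using System.Data.SqlClient;

        class BulkLoadSketch
        {
            static void Load(DataTable invoiceDetails)
            {
                using (var bulkCopy = new SqlBulkCopy("Server=.;Database=SkipPro;Integrated Security=true"))
                {
                    bulkCopy.DestinationTableName = "dbo.InvoiceDetail";
                    bulkCopy.BatchSize = 5000;              // stream rows in batches instead of row-by-row
                    bulkCopy.WriteToServer(invoiceDetails);
                }
            }
        }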

    Read the article

  • Why aren't google api clients built on top of Apache's Abdera project ?

    - by lisak
    Hey, could anybody please explain this to me? As far as I can see, the developers of Java's Google API client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware of the fact that the Google Data protocol is a little specific regarding Atom publishing, but if one needs to use some of the fancy extensions and features that the Apache Abdera project offers for this protocol, it is better not to use the Google API client library and to implement the client from scratch with Abdera... And I'm sure that in a lot of cases its features, such as Abdera's JCR adapter, would become very handy for Google Docs, Google Translator Toolkit and others. Now it's great that there is a Google API client library to be used for Google Docs, but what am I going to do with the documents? I believe that in more than half the cases there is also a repository or database on the other side. And in that case, Abdera is needed, not the simple Google API clients that are only marshalling/unmarshalling the feeds... In fact, there is something to persist in all of the Google APIs. It would make sense if Google decided to invest the effort into Abdera enhancement... This doesn't... To make the question more specific: how are you developing Google API clients that need entry persistence (JCR for instance)? What would be the best way to integrate a Google API client library with Apache Abdera?

    Read the article

  • Error while opening port in Java

    - by Deepak
    I am getting the following error while trying to open the cash drawer:

        Error loading win32com: java.lang.UnsatisfiedLinkError: C:\Program Files\Java\jdk1.6.0_15\jre\bin\win32com.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform

    The code I am using is as follows:

        import javax.comm.*;
        import java.util.*;

        /** Check each port to see if it is open. **/
        public class openPort {
            public static void main(String[] args) {
                Enumeration port_list = CommPortIdentifier.getPortIdentifiers();

                while (port_list.hasMoreElements()) {
                    // Get the list of ports
                    CommPortIdentifier port_id = (CommPortIdentifier) port_list.nextElement();

                    // Find each port's type and name
                    if (port_id.getPortType() == CommPortIdentifier.PORT_SERIAL) {
                        System.out.println("Serial port: " + port_id.getName());
                    } else if (port_id.getPortType() == CommPortIdentifier.PORT_PARALLEL) {
                        System.out.println("Parallel port: " + port_id.getName());
                    } else {
                        System.out.println("Other port: " + port_id.getName());
                    }

                    // Attempt to open it
                    try {
                        CommPort port = port_id.open("PortListOpen", 20);
                        System.out.println("  Opened successfully");
                        port.close();
                    } catch (PortInUseException pe) {
                        System.out.println("  Open failed");
                        String owner_name = port_id.getCurrentOwner();
                        if (owner_name == null) {
                            System.out.println("  Port owned by unidentified app");
                        } else {
                            // The owner name is not returned correctly unless it is
                            // a Java program.
                            System.out.println("  " + owner_name);
                        }
                    }
                }
            } // main
        } // PortListOpen

    Read the article

  • JQuery Deferred - Adding to the Deferred contract

    - by MgSam
    I'm trying to add another asynchronous call to the contract of an existing Deferred before its state is set to success. Rather than try and explain this in English, see the following pseudo-code:

        $.when(
            $.ajax({
                url: someUrl,
                data: data,
                async: true,
                success: function (data, textStatus, jqXhr) {
                    console.log('Call 1 done.');
                    jqXhr.pipe(
                        $.ajax({
                            url: someUrl,
                            data: data,
                            async: true,
                            success: function (data, textStatus, jqXhr) {
                                console.log('Call 2 done.');
                            }
                        })
                    );
                }
            }),
            $.ajax({
                url: someUrl,
                data: data,
                async: true,
                success: function (data, textStatus, jqXhr) {
                    console.log('Call 3 done.');
                }
            })
        ).then(function () {
            console.log('All done!');
        });

    Basically, Call 2 is dependent on the results of Call 1. I want Call 1 and Call 3 to be executed in parallel. Once all 3 calls are complete, I want the All Done code to execute. My understanding is that Deferred.pipe() is supposed to chain another asynchronous call to the given deferred, but in practice I always get Call 2 completing after All Done. Does anyone know how to get jQuery's Deferred to do what I want? Hopefully the solution doesn't involve ripping the code apart into chunks any further. Thanks for any help.

    Read the article

  • Steps to Investigate Cause of Web.Config Duplicate Section

    - by pauly
    Symptoms

    In an IIS .NET 2.0 Integrated app pool, double-clicking to view any web.config section results in the following error dialog: "There was an error while performing this operation.... Filename... web.config... Error: There is a duplicate..." Browsing to the URL displays an HTTP 500.19 internal server error: "There is a duplicate... 'system.web.extensions/scripting/scriptResourceHandler' section defined...." Running the app from VS 2008, an "Unable to start debugging on the web server..." dialog is displayed.

    Things Tried

    Looked at other application directories on the same IIS server: no problem viewing web.config contents or serving up the app. Removed and re-added the application in IIS. Checked out a new version of the source code. Reverted to prior versions of the web.config file. Looked for web.config files that might have duplicate sections in the Inetpub root, in "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config", and in the "Views" subfolder of the ASP.NET MVC app. Checked out the source code to another dev machine and set up an IIS 7 app folder there: no problem with web.config.

    Question

    If the reason for this error is another web.config file, where else should I look? Are there other reasons for these symptoms?

    Read the article

  • Accessing previous activity instances in a sequence activity

    - by Dan Revell
    This has a rather SharePoint spin to it, but the problem is straight workflow. I've got a parallel replication activity which contains a sequence activity. The sequence activity contains a CreateTask activity, a CodeActivity, an OnTaskChanged activity and finally a CompleteTask activity. The idea is to create a task for each username passed into the ReplicatorActivity.InitialChildData property. Typically in workflow I bind a field to CreateTask.TaskId and CreateTask.TaskProperties and, inside CreateTask.MethodInvoking, set these through the local bound fields. This works and my tasks all get created properly. However, in the CodeActivity that follows, I want to access the TaskProperties. The problem I am encountering is that this field holds the values of the final task to be created, as the CreateTask runs for all the replications before the CodeActivity gets to run. From the CodeActivity, here are two ways I've tried to access the CreateTask activity instance from the same context or instance or whatever the terminology is for the replicated sequence:

        CreateTask task = ((CreateTask)sender.Parent.GetActivityByName("createSoftwareRequestTask", true));
        CreateTask createTask = (CreateTask)sender.Parent.Activities[0];

    Unfortunately both CreateTask references refer back to the last task to be created, and not the task from the context that the CodeActivity is executing within. I can think of two reasons why this might be:

    1. I'm not accessing the correct instance with my code.
    2. I am accessing the correct instance, but as the properties I require were bound to and set through local fields, their previous data was overwritten.

    I'm hitting a brick wall with my understanding of workflow here and would very much appreciate some assistance.

    Read the article

  • Should I use formal methods on my software project?

    - by Michael
    Our client wants us to build a web-based rich internet application for gathering software requirements. Basically it's a web-based CASE tool that follows a specific process for getting requirements from stakeholders. I'm the project manager and we're still in the early phases of the project. I've been thinking about using formal methods to help clarify the requirements for the tool, for both my client and the developers. By formal methods I mean some form of modeling, possibly something mathematically based. Some of the things I've read about and am considering include Z notation (http://en.wikipedia.org/wiki/Z_notation), state machines, UML 2.0 (possibly with extensions such as OCL), Petri nets, and some coding-level techniques like contracts and pre- and post-conditions. Is there anything else I should consider? The developers are experienced, but depending on the formalism used they may have to learn some math. I'm trying to determine whether it's worthwhile for me to use formal methods on this project and, if so, to what extent. I know "it depends", so the most helpful answer for me is a yes/no with supporting arguments. Would you use formal methods if you were on this project?

    Read the article

  • Automated UAT/functional tests on Swing applications without source code

    - by jas
    Our team is now working on a big Swing application. Our job basically focuses on writing extensions to the existing framework. A typical job would be adding a new panel, or adding a new tab with some extra functionality that suits our needs. It seems FEST can help a lot in terms of unit-testing our code, and I am going to try it out this week. But the question here is whether there is a way to do automated functional testing on the whole application. In other words, we do not only need to test our code but also the framework. After all, UAT is the most important part. I am currently considering decompiling the jar files we got into source code, so that we can identify the components and then use FEST. So, before I get started and give this approach a shot, I thought I'd ask for ideas and inspiration here. There must be people who have done similar things before, and it would be nice if I could learn from the veterans who have fought this battle already. Thanks.

    Read the article

  • Run a PHP script every second using CLI

    - by Saif Bechan
    Hello, I have a dedicated server running CentOS with a Parallels Plesk panel. I need to run a PHP script every second to update my database. There is no alternative timewise; I have checked every method, and it needs to be updated every second. I can reach my script at the URL http://www.mysite.com/phpfile.php?key=123, and this has to be executed every second. Does anyone have any knowledge at all on doing this? I cannot seem to find the answer. I heard about doing it with the CLI and PuTTY, but I have no knowledge of this at all. Or can this be done using the Plesk panel? And can the file be executed locally every second, like \phpfile.php? If someone helps me answer these questions I would really appreciate it. Regards

    EDIT: It has been a few months since I added this question. I ended up using the following code:

        #!/usr/bin/php
        <?php
        $start = microtime(true);
        set_time_limit(60);

        // run roughly once per second for the remainder of the minute
        for ($i = 0; $i < 59; ++$i) {
            doMyThings();
            time_sleep_until($start + $i + 1);
        }

    Thank you for this code, guys! My cronjob is set to every minute. I have been running this for some time now in a test environment, and it works out great. It runs really fast, and I see no increase in CPU or memory usage.
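
    For reference (not part of the original question), the matching crontab entry: cron's finest granularity is one minute, so the entry fires the script every minute and the loop above covers the individual seconds. The script path is a placeholder.

        # m h dom mon dow   command
        * * * * * /usr/bin/php /path/to/phpfile.php >/dev/null 2>&1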

    Read the article

  • Renaming TurboGears 2's Repoze Fields with TGAdmin

    - by William Chambers
    I've been working on renaming TurboGears 2's Repoze 'groups' field to 'roles', to free the namespace and db tables for other purposes. Also, roles makes much more sense to me than groups, because I have a strong Drupal background. Now, I have found some of the docs for doing this, such as these:

    http://www.turbogears.org/2.1/docs/main/Auth/Customization.html#customizing-the-model-structure-assumed-by-the-quickstart
    http://code.gustavonarea.net/repoze.what-quickstart/#customizing-the-model-definition

    However, these only go part of the way. I have made all the changes required (I'm pretty sure at least; I've double-checked a few times), as you can see in this diff. This seems to work fine; however, I've run into a rather major issue with the TurboGears Admin system. I've tried http://turbogears.org/2.0/docs/main/Extensions/Admin/index.html and it didn't seem to make any difference, though I'm not 100% sure I did it correctly. The problem occurs when I attempt to go to localhost/admin/permissions/. It causes an Internal Server Error and outputs the following error: http://pastebin.com/YWMH3SiU. This error does not happen on the Roles/Users pages, and permissions /edit/1 also works. I'm running Kubuntu 10.04 with TG 2.1b2. (I'm running the beta mostly for easier Mako support, which is really important.) Any help would be very appreciated.

    Read the article

  • Fix an external dependency of a ruby gem

    - by Patrick Daryll Glandien
    I am currently trying to install the gem nfoiled, which provides a Ruby interface to ncurses. I do this by using gem install elliottcable-nfoiled, as suggested in the README. Downloading it manually from the GitHub repository and then installing it with rake install doesn't work because of a problem with the echoe gem, thus I am bound to use the normal way. Unfortunately it depends on the gem ncurses-0.9.1, which is only compatible with Ruby 1.8, and thus I can't install nfoiled either (since it always tries to compile ncurses-0.9.1 first):

        novavortex:/usr/src# gem install elliottcable-nfoiled
        Building native extensions.  This could take a while...
        ...
        form_wrap.c: In function `rbncurs_m_new_form':
        form_wrap.c:395: error: `struct RArray' has no member named `len'
        form_wrap.c: In function `rbncurs_c_set_field_type':
        form_wrap.c:619: error: `struct RArray' has no member named `len'
        form_wrap.c: In function `rbncurs_c_set_form_fields':
        form_wrap.c:778: error: `struct RArray' has no member named `len'
        form_wrap.c: In function `make_arg':
        form_wrap.c:1126: error: `struct RArray' has no member named `len'
        make: *** [form_wrap.o] Error 1
        Gem files will remain installed in /usr/local/lib/ruby/gems/1.9.1/gems/ncurses-0.9.1 for inspection.
        Results logged to /usr/local/lib/ruby/gems/1.9.1/gems/ncurses-0.9.1/gem_make.out
        novavortex:/usr/src#

    I managed to fix the problem in ncurses-0.9.1 (by replacing RARRAY(x)->len with RARRAY_LEN(x)) and to install it, but nfoiled still always tries to recompile it from a freshly downloaded source. How can I install nfoiled without having it recompile ncurses first?

    Read the article

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes given a simple node.id, node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value but still keep the tasty recursion?

        /*
         * A tree node
         */
        case class TreeNode(val id: String, val parentId: String) {
          var left: Int = 0
          var right: Int = 0
        }

        /*
         * a method to compute the left/right node values
         */
        def walktree(node: TreeNode) = {
          /*
           * increment state for the inner function
           */
          var c = 0

          /*
           * A method to set the increment state
           */
          def increment = { c += 1; c } // poo

          /*
           * the tasty inner method
           * treeNodes is a List[TreeNode]
           */
          def walk(node: TreeNode): Unit = {
            node.left = increment

            /*
             * recurse on all direct descendants
             */
            treeNodes filter (_.parentId == node.id) foreach (walk(_))

            node.right = increment
          }

          walk(node)
        }

        walktree(someRootNode)

    Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time. I am pulling a flat list into memory and all I have is an association via node ids as pertains to parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed I have settled on this synthesised version:

        def walktree(node: TreeNode) = {
          def walk(node: TreeNode, counter: Int): Int = {
            node.left = counter
            node.right = treeNodes
              .filter(_.parentId == node.id)
              .foldLeft(counter + 1) { (counter, curnode) =>
                walk(curnode, counter) + 1
              }
            node.right
          }
          walk(node, 1)
        }

    Read the article

  • Issue in parsing the GridViewRows in a Telerik RadGridView

    - by cricketmovies
    Hi, I would like to do something similar to what we do in ASP.NET, where we loop through all the rows in a GridView and assign a particular value to a particular cell in any row whose TaskId matches the current Id. This has to happen in the Tick handler of a DispatcherTimer object, since I have a Start Timer button column for every row in the GridView: when a particular row's Start Timer button is clicked, I have to start its timer and display it in a cell of that row. Similarly, there can be multiple timers running in parallel. For this I need to be able to check the TaskId of the particular task and keep updating the cell values with the updated time in all of the tasks that have a timer started.

        TimeSpan TimeRemaining = somevalue;
        string CurrentTaskId = "100";

        foreach (GridViewRow row in RadGridView1.Rows)
        // Here I tried RadGridView1.ChildrenOfType<GridViewRow>() as well, but it comes back null
        {
            if ((row.DataContext as Task).TaskId == CurrentTaskId)
                row.Cells[2].Content = a.TaskTimeRemaining.ToString();
        }

    Can someone please let me know how I can get this functionality using the Telerik RadGridView?

    Cheers, Syed.

    Read the article

  • Debian packaging of a Python package.

    - by chrisdew
    I need to write (or find) a script to create a Debian package (using python-support) from a Python package. The Python package will be pure Python (no C extensions). For testing purposes, the Python package will just be a directory with an empty __init__.py file and a single Python module, package_test.py. The packaging script must use python-support to provide the correct bytecode for possible multiple installations of Python on a target platform (i.e. v2.5 and v2.6 on Ubuntu Jaunty). Most advice I find while googling is just nasty hacks that don't even use python-support or python-central. I have so far spent hours researching this, and the best I can come up with is to hack around the script from an existing open source project - but I don't know which bits are required for what I'm doing. Has anyone here made a Debian package out of a Python package in a reasonably non-hacky way? I'm starting to think that it will take me more than a week to go from no knowledge of Debian packaging and python-support to getting a working script. How long has it taken others? Any advice? Chris.

    Read the article

  • What is the best anti-crack scheme for your trial or subscription software?

    - by gmatt
    Writing code takes time and effort, and just like any other human being we need to make an income to live (save for the few who are actually self-sustaining). Here are three general schemes for making a living: independent developers can offer a trial-then-purchase scheme; an alternative is an open source base application with paid extensions; a last (probably least popular with customers) scheme is to enforce some kind of subscription, where the price of the software pales in comparison to the long-term subscription fees. So, my question is a hypothetical one. Suppose that you invest thousands of hours into developing an application. Now suppose you can choose any one of the three options to make a living off this application--or any other option you want--and suppose you have a very real fear of losing 80% of your revenue to a cracked version, if one can be made. To be clear, this application does not require the internet to perform all of its useful functions; that is, your application is a prime candidate to be a cracked release on some website. Which option would you feel most comfortable with in defending yourself against this possible situation, and briefly, why would that option be best?

    Read the article
